The Evolution of APIs From The Cloud Age and Beyond


APIs in the Cloud-Native World of Microservices and Kubernetes


Contents

Modern APIs of the Cloud Age
  RESTful APIs
  JSON
  OpenAPI
  gRPC
  GraphQL
  Loosely Coupled APIs
  Message Queues
  API Middleware
  Fine-grained APIs
  Microservices
  Serverless Functions
  Platform Independence
  Service Mesh
  The API Economy
The Future of APIs: What’s Next?
  The Emergence of Web 3.0
  Event-driven APIs
  Blockchain
  Artificial Intelligence
  IoT
Preparing for the Future
  API Development Practices
  API Linting
  APIOps
  Enhancing API Performance
  Securing APIs
  Environment Independence
Final Words


In this Part Two of our series on the evolution of APIs, we pick up where
we left off in Part One, The Evolution of APIs: From RPC to SOAP and XML,
considering how the shape and development of APIs have changed as we
enter the cloud age of the internet.

Modern APIs of the Cloud Age

In 2006, Amazon Web Services (AWS) emerged as a cloud service provider,
effectively kicking off the cloud age of digital transformation. Other
cloud providers have since joined the fray, with Microsoft Azure and
Google Cloud Platform (GCP) composing the market majority alongside AWS.
Many smaller players have emerged as well.

Cloud computing has revolutionized API development and deployment.
Cloud vendors offer various cloud services through API endpoints. These
API endpoints can be accessed through the browser, command-line
interface (CLI) tools, and SDKs.

At the end of Part One in this series, we saw how web APIs
communicate via SOAP/XML over the HTTP protocol.

RESTful APIs

Initially introduced in 2000, Representational State Transfer (REST)
came onto the scene as a new, lightweight API architecture. Because
developers found SOAP APIs challenging to understand and cumbersome to
use, REST saw broader adoption in subsequent years.

APIs that adhere to REST are known as RESTful APIs. Although
communication still occurs over HTTP, REST has significant differences
from SOAP.

Unlike SOAP, REST is not a protocol but rather a set of architectural
principles that the interface of the API must observe. So, an API can
exchange XML messages while still being RESTful. To be RESTful, an API
must adhere to the following requirements:


• Uniform Interface: Each API resource is identifiable by a resource
URI that the client can call. The API response message must contain
enough information to tell the client how to process the message, what
additional actions can be performed, and how to modify or delete the
resource on the server. The client only needs the URI of the API, and
the API response should provide enough information for the client to
build hyperlinks for accessing other resources.

• Client-server: A RESTful API must follow a client-server
architecture. The client requests a resource over HTTP. The server
provides that resource. The client and the server are independent of
each other. The client does not know or care about where or how the
server stores its data, and the server is not concerned about the
client’s business logic.

• Stateless: The API server will not host any session or state details
about a client request. As far as the API is concerned, each
request is new. This is the default behavior of HTTP requests. The
responsibility for managing the state information lies entirely with
the client.

For example, suppose an API endpoint requires authentication. In that
case, the server will not remember if the client making a request also
made another request a minute ago. The server will still ask for
authentication, and the client has to provide it.

Keeping APIs stateless means they don’t have to be concerned with
persisting information, making them loosely coupled. However,
statelessness also means the client needs to send more context
information, which may increase network latency.

• Cacheable: The information the client requests does not necessarily
need to come fresh from the server. The client can cache a response
from the API and continue using that response as long as it’s valid.
The client can check the local client cache, proxy server cache, or
other caching servers for the requested data before sending a new
request to the API URL. To ensure the validity of the cached data, the
API server must add the Expires HTTP header to its response. Using
this information, the client can decide if the cached data is valid
or stale.
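The freshness check described above can be sketched in a few lines of Python. This is a hypothetical helper, not a full HTTP cache; real clients also honor headers such as Cache-Control:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime, format_datetime

def is_cached_response_fresh(headers: dict, now: datetime) -> bool:
    """Decide whether a cached API response can be reused,
    based on the Expires HTTP header the server attached."""
    expires = headers.get("Expires")
    if expires is None:
        return False  # no freshness info: treat the cached copy as stale
    return now < parsedate_to_datetime(expires)

# A response cached with an Expires header one hour in the future
now = datetime.now(timezone.utc)
cached_headers = {"Expires": format_datetime(now + timedelta(hours=1))}
print(is_cached_response_fresh(cached_headers, now))  # True: reuse the cache
```

If the header is absent or the expiry time has passed, the client falls back to a fresh request against the API URL.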


• Layered system: The interaction between the client and the server
can be enabled through other auxiliary services.

Figure: Layered system

As shown in the diagram above, the client does not know if it is
directly connected to the API server or if it is going through multiple
layers of applications.

• Code-on-demand (optional): This feature is optional, but it allows a
REST API endpoint to return application code (such as JavaScript) to
the client.

RESTful APIs support several message formats:

• Plaintext
• HTML
• YAML
• XML
• JSON

In contrast, SOAP only supports XML. Although multiple formats are
supported, RESTful APIs predominantly use JSON.


JSON

JavaScript Object Notation (JSON) is a data format that’s simpler than
XML. JSON documents are built from objects containing key-value pairs
and from arrays of values. Here is a JSON representation of the XML
document we saw in Part One:

[
  {
    "firstname": "John",
    "lastname": "doe",
    "email": "john.doe@email.com",
    "phone": "111-222-3333"
  },
  {
    "firstname": "Jane",
    "lastname": "Bloggs",
    "email": "jblogg@anotheremail.com",
    "phone": "222-333-4444"
  }
]
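One reason for JSON’s popularity is that it maps directly onto the native data structures of most languages. In Python, for example, the document above parses with the standard library alone:

```python
import json

doc = """
[
  {"firstname": "John", "lastname": "doe",
   "email": "john.doe@email.com", "phone": "111-222-3333"},
  {"firstname": "Jane", "lastname": "Bloggs",
   "email": "jblogg@anotheremail.com", "phone": "222-333-4444"}
]
"""

contacts = json.loads(doc)      # a list of dicts; no schema or parser code needed
print(contacts[0]["email"])     # john.doe@email.com
print(len(contacts))            # 2
```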

OpenAPI

OpenAPI is a formal specification for how to define the structure and
syntax of a RESTful API. This interface-describing document is both
human and machine-readable, which makes it portable. OpenAPI has
enjoyed wide adoption by the API developer community.

An OpenAPI document has three major parts:

1. The API endpoints and the HTTP methods to access them. The
OpenAPI document specifies the input parameters for each
method and the possible HTTP response codes.
2. Reusable components within the API.
3. API metadata such as the title, version, description, and
other information.
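These three parts map onto the top-level sections of an OpenAPI document. A minimal, hypothetical description for a contacts API (shown here as a Python dict rather than the usual YAML) might look like:

```python
# A minimal OpenAPI 3.0-style description for a hypothetical contacts API.
openapi_doc = {
    "openapi": "3.0.3",
    # 3. API metadata: title, version, description
    "info": {"title": "Contacts API", "version": "1.0.0",
             "description": "Example API for illustration"},
    # 1. Endpoints, HTTP methods, input parameters, and response codes
    "paths": {
        "/contacts/{id}": {
            "get": {
                "parameters": [{"name": "id", "in": "path", "required": True,
                                "schema": {"type": "string"}}],
                "responses": {"200": {"description": "A single contact"},
                              "404": {"description": "Not found"}},
            }
        }
    },
    # 2. Reusable components shared across the API
    "components": {
        "schemas": {"Contact": {"type": "object",
                                "properties": {"firstname": {"type": "string"},
                                               "email": {"type": "string"}}}}
    },
}

print(sorted(openapi_doc))  # ['components', 'info', 'openapi', 'paths']
```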


Embracing the OpenAPI specification offers many benefits for the API
developer. They include:

• Increased collaboration between development teams. Teams can easily
search, find, and use APIs that have been built by others.
• Automated application development by code generators.
• Help with automated test case generation.

gRPC
RESTful APIs typically use HTTP as their communication protocol. In
2015, HTTP/2, the newer version of the protocol, came out. Around the
same time, Google introduced gRPC, another framework for APIs, which
used HTTP/2.

In Part One, we discussed how Remote Procedure Call (RPC) was one
of the earliest means of communication between applications running
on remote machines. gRPC is a framework for creating RPC-based
APIs. gRPC is based on RPC but takes it a step further by adding
interoperability with HTTP/2.

Since it’s based on RPC, a gRPC client can directly call a service method
on a gRPC server—whether the server is remote or local. The gRPC
server implements the service interface, consisting of its methods and
parameters and the returned data types, and answers client calls.

Instead of using JSON or XML, gRPC uses protocol buffers (protobuf) as
its Interface Definition Language. With protobuf, a developer first
defines the data structure of the message and then compiles the
protobuf to create the data access classes.

Some benefits of gRPC include:

• Smaller message size: gRPC messages are up to 30% smaller than
JSON messages.
• Faster communication: gRPC communication is up to eight times faster
than JSON over HTTP/1.1.
• Streaming connection: Unlike simple request/response in REST, gRPC
can use client-side, server-side, and bidirectional streaming.


GraphQL
Originally developed by Facebook for its mobile applications in 2012 and
open sourced in 2015, GraphQL is a query language and runtime that
allows users to query APIs to return the exact data they need. In a
typical RESTful API, clients make multiple calls for data, with
different parameters appended to the URL. In contrast, GraphQL allows
developers to create queries that can fetch all (and only) the data
needed from multiple sources in a single call.

Developers create GraphQL schemas to include all the data fields an API
query can ask for. When a client application queries a GraphQL API, the
query is matched against the schema. If the fields requested exist and
match, then the API returns the result.
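The schema-matching step can be illustrated with a toy resolver in Python. The field and entity names here are hypothetical, and a real GraphQL server would use a library such as graphql-core rather than this sketch:

```python
# Toy illustration of matching a requested field set against a schema.
SCHEMA = {"user": {"firstname", "lastname", "email", "phone"}}

def resolve(entity: str, requested: set, data: dict) -> dict:
    """Return only the requested fields, rejecting unknown ones."""
    allowed = SCHEMA.get(entity, set())
    unknown = requested - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {field: data[field] for field in requested}

record = {"firstname": "Jane", "lastname": "Bloggs",
          "email": "jblogg@anotheremail.com", "phone": "222-333-4444"}

print(resolve("user", {"firstname", "email"}, record))
# Only the two requested fields come back, in a single call.
```

If the query asks for a field the schema doesn’t define, the request is rejected instead of silently returning partial data.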

Loosely Coupled APIs


API developers soon realized that APIs would be more reusable if they
were more independent. In other words, APIs had to become standalone
pieces of software that didn’t depend on the rest of the application’s
functionality. With this realization, APIs continued evolving toward
loosely coupled design.

A loosely coupled component ensures API services can be redesigned,
rewritten, and redeployed without running the risk of breaking other
services. There are several ways to make an API service loosely coupled.
These include message queues, middleware, and fine-grained design.

Message Queues

Message queues are software components that sit between two
applications and help one application communicate with the other
asynchronously. Contrast this, for example, with synchronous
communication between services:

• API A sends a message directly to API B.
• API A waits for the target to respond, but API B is busy or down.
• API A is blocked until API B responds.

This is a tightly coupled design.


To alleviate this, API A can send its message to a message queue for
API B and then continue with its work. API B can poll the message
queue periodically to see if there are any messages for it. When it
finds the message from API A, it fetches the message, performs the
requested function, and returns the result. To make API B loosely
coupled, it can also use another queue and send the result there.

Figure: Two Loosely Coupled APIs Using Message Queues
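The flow above can be sketched with Python’s standard-library queues standing in for a real broker such as Kafka or SQS. The queue names and message shapes are hypothetical:

```python
import queue

# Two in-memory queues stand in for a message broker.
requests_for_b = queue.Queue()   # API A -> API B
results_for_a = queue.Queue()    # API B -> API A

# API A drops its message on the queue and carries on with its work.
requests_for_b.put({"action": "reconcile", "account": "acct-42"})

# Later, API B polls the queue, performs the work, and queues the result.
msg = requests_for_b.get()
results_for_a.put({"account": msg["account"], "status": "done"})

# API A picks up the result whenever it is ready.
print(results_for_a.get())  # {'account': 'acct-42', 'status': 'done'}
```

Neither side blocks waiting for the other; each only ever talks to a queue, which is what makes the coupling loose.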

Message queues are an integral part of modern, complex software
engineering. There are many tools available, including Apache Kafka,
AWS SQS, and Google Pub/Sub.


API Middleware

In a loosely coupled design, the integration between APIs can be
delegated to middleware. The middleware ensures the APIs can talk to
one another by:

• Providing connectivity logic
• Mapping and translating between message formats
• Translating between protocols
• Orchestrating multiple instances of APIs
• Authenticating and authorizing operations
• Keeping state data
• Managing the security audit

This design ensures that changes to the API integration logic only
need to happen in one place and that the APIs themselves can be
managed independently.

Fine-grained APIs

Fine-grained APIs also help achieve loose coupling. In a coarse-grained
application, application functionality spreads across only a few APIs.
Each API takes care of multiple functionalities. Although this approach
is an improvement over a large monolithic application architecture,
dependency issues remain. Coarse-grained APIs are not very easy to
manage or deploy.

Instead, these APIs can be broken down further, with each subsequently
smaller API exposing fewer functionalities. Granted, as APIs become
finer-grained, the number of APIs increases. However, smaller APIs
become easier to develop, test, manage, deploy, and upgrade. In an
ultimately fine-grained model, an API would perform only a single
function or return a specific value. At that point, an enterprise
application might have hundreds or even thousands of APIs.

To illustrate the concept, let’s consider an ERP application. The
monolithic application can be initially broken down into modules like
sales, marketing, finance, CRM, and human resources. Implementing
each of these modules as separate APIs would leave APIs that are
still too large. Instead, each module can be broken down into different
coarse-grained APIs. For example, the finance module can be
broken down into APIs called accountsPayable, accountsReceivable,
reportGeneration, taxReturn, and others.


Accounts payable is a major function that contains sub-functions,
including checking a balance, making a payment, updating a ledger,
and running reconciliation. Each of these functions can be created as
separate, fine-grained APIs. These fine-grained APIs can be further
broken down into more individual, standalone APIs, and so on.

Figure: Coarse-grained and Fine-grained APIs of a Monolithic Application

As organizations fully embraced the API paradigm, they began
refactoring, and often rewriting, complex applications. That effort
brought about the shift toward microservices development.


Microservices
The term “microservices” refers both to a software design architecture
and to a specific type of software implementation. In short,
microservices allow a complex application to be broken down into
several small, independent “services.” These services are nothing more
than software programs, much like the subroutines, functions, or web
service APIs we have covered thus far.

An example of a microservice can be a user registration process that
captures user details from a web form and saves them into a database.
Another microservice could be responsible for sending an email to
verify the user’s identity. Another microservice could handle the user
login process, while yet another handles forgotten password requests.

The functionalities within microservices are exposed as APIs. So, an
API within a microservice can call another API that it’s bundled with.
Similarly, one microservice can call the API of another microservice.
Sometimes, the APIs may call other publicly available APIs.

Figure: Example of a Microservice Architecture


What makes microservices unique is that they are loosely coupled
and independent. In other words, you can change the program code and
internal workings of an API within a microservice without changing a
single line of code in other APIs within that same microservice. Like
APIs, microservices can be written in any language and deployed
anywhere.

Typically, a small team of developers is responsible for creating and
maintaining each different microservice. They develop the microservice,
test its functionality independently, and then deploy it when it’s
ready.

One of the major reasons behind microservice success is the wide
adoption of continuous integration (CI) and continuous deployment (CD)
by development teams. With CI, teams can work on new features or bug
fixes for one microservice and merge the code into the main branch of
the version control system, independent of other microservices within
that same repository. The merge can trigger a new build of the software
and perhaps even a new deployment to the production environment.

Because microservices are loosely coupled, one microservice
experiencing a spike in load will not affect other microservices.
The performance of the whole application is minimally affected. All
that the underlying infrastructure needs to do is scale accordingly.
Similarly, if a microservice fails, this won’t bring down the whole
application: the blast radius is minimized. The failing microservice
alone is inaccessible. This allows development teams to pinpoint
their efforts on fixing the specific issue instead of searching for the
root cause across the whole application.

Serverless Functions

Cloud providers now offer a newer way to host microservices. Instead
of customers working on provisioning their servers, configuring
networking, and creating scaling logic for their microservices, they
can use serverless functions.

A serverless function is a standalone piece of code that a cloud
provider runs on its managed environment. The customer (the developer)
does not have to spin up any hardware or worry about load balancing,
scaling, or garbage collection; the cloud service does that.


Behind the scenes, the code does run on some server somewhere, but
those details are opaque to the developer. As far as the developer is
concerned, there’s no server involved—it’s serverless.

These types of managed services are now known as
Function-as-a-Service (FaaS). Some examples of such services are
AWS Lambda, Azure Functions, and Google Cloud Functions.
Serverless function services are an excellent choice for creating
and hosting microservices.
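A serverless function is typically just a handler that the platform invokes per request. A minimal sketch in the style of an AWS Lambda Python handler (the event shape here is hypothetical, and real deployments wire the handler to a trigger such as an API gateway):

```python
import json

def handler(event, context=None):
    """Handle one invocation: the platform passes the request as `event`
    and takes care of servers, scaling, and routing around this code."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, we can invoke the handler directly, just as the platform would.
response = handler({"name": "Jane"})
print(response["statusCode"])        # 200
print(json.loads(response["body"]))  # {'message': 'Hello, Jane!'}
```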

Platform Independence
At the beginning of the cloud age, there was an early rush to port
everything to the cloud. That impulse has now given way to more
mature organizational thinking. Many larger enterprises are deciding
to keep their on-premise network, often adopting a hybrid-cloud or
multi-cloud model. With the multi-cloud model, an organization has
its IT assets spread across a private cloud and multiple public cloud
tenancies.

This sprawl means microservices and their APIs are now also multi-
tenant. An application can span across the data center and multiple
public clouds. Services could be running on physical machines, virtual
servers both on-premise and in the cloud, Docker containers running in
Kubernetes pods, or as serverless entities.

As parts of an application ecosystem, microservices need to
communicate with one another. For example, the user registration
service needs to call the email verification microservice before saving
the details in a database. This communication can happen using event
streaming, message brokers, or REST APIs.

These services can connect using direct links, VPNs, and trusted
virtual private clouds (VPCs) at a physical level. However, there is also
overhead. For example:

• The network that glues all the services together may be slow or
have unpredictable performance.
• Even when services are loosely coupled, they still need
a reliable way to communicate. Building highly available,
intelligent routing mechanisms for service communication is a
complex undertaking.


• Ensuring a consistent network security model across public and
private clouds can be a nightmare.

Fortunately, the service mesh addresses these concerns.

Service Mesh
A service mesh is a dedicated infrastructure layer built into an
application to enable its microservices to communicate using proxies.
A service mesh takes the service-to-service communication logic from
the microservice’s code and moves it to its network proxy.

The proxy runs in the same infrastructure layer as the service and
handles the message routing to other services. This proxy is often
called a sidecar because it runs side-by-side with the service. The
interconnected sidecar proxies from many microservices create the mesh.

Figure: A Service Mesh Running on Hybrid Cloud


The service mesh offers many advantages to microservices:

• Observability: The logs and metrics from a mesh can quickly show
if the services are communicating without any issues.
• Secure connection: Risk is minimized as proxies connect securely.
• Automated failover: The mesh automates retries and backoffs for
failed services.

The API Economy


We will finish this chapter by introducing another concept the cloud
age has helped create: the API economy. The API economy is not a
technology or an architectural pattern. Instead, it refers to the business
practice of organizations exposing their digital services or information
through the controlled use of APIs.

As we saw at the beginning, such exposure allows companies to monetize
their data and services. This can subsequently lead to increased market
share and improved customer reach. For example, a fast-food provider
can use Uber Eats to deliver food to customers’ doorsteps. Uber’s
public APIs make this possible. Uber itself makes use of other public
APIs like Google Maps.

A great example of the API economy is travel sites that allow you to
search, compare, book, and pay for travel arrangements. There are
thousands of operators in the travel industry, and many offer their
information through APIs. Travel sites use these APIs to get their data,
but the end-user is unaware of it.

Some of the benefits of the API economy include:

• Drive for innovation: Companies are trying new ways to reach
customers faster. Innovative use of APIs helps them achieve this.
• Reduced time to market: Companies don’t have to wait to build
applications. There are various public APIs available for every
conceivable operation. Using such APIs as building blocks allows
developers to create prototypes and MVPs at a rapid pace.
• Increased value: Organizations add value for their customers,
suppliers, partners, and other external stakeholders by making
their data and services readily available through APIs. It gives
stakeholders a self-service option.

However, the API economy also raises concerns about security and
performance, which we will see in the next and final chapter.


The Future of APIs: What’s Next?


Over the last ten years, APIs have played a significant role in the growth
of Fintech, Artificial Intelligence (AI), Blockchain, Internet-of-Things (IoT),
and cybersecurity. In this chapter, we will explore the future possibilities
of APIs and how their development and integration might continue to
shape the evolution of these technologies.

The Emergence of Web 3.0


The mainstream adoption of the World Wide Web in the mid-90s saw
the birth of Web 1.0. During this time, web applications were challenging
to design and implement. Websites were static, and they didn’t have
the interactivity required to help visitors be anything more than mere
information consumers. So, while organizations were busy showcasing
their products and services in this new medium, consumer participation
was largely lacking.

The prevalence of accessible web applications shifted this dynamic
as social media and video content sharing made their entry. The birth
of Web 2.0 in the early years of this century led to more people
sharing content and information and applications becoming more
interactive. This was catalyzed by innovation in mobile internet and
cloud computing. The web we know today is Web 2.0, and it continues to
forge ahead. However, there is a third wave that is about to make its
entry.

Web 3.0 (originally called the “Semantic Web” by Tim Berners-Lee) is the
Web that will dominate tomorrow. Web 3.0 has yet to receive a precise
definition, but most people agree on some standard features.

It is envisioned that Web 3.0 will be based on decentralized networks
and protocols. What this means is that a website will never go down,
because a chain of servers will serve it. We have already begun to see
this decentralization in blockchain technology. In Web 3.0, blockchain
will become ubiquitous, with no single entity having authority over
information.

The web we know today will become more accessible than ever. Today,
the web is mostly used through computers and mobile devices. In Web
3.0, the internet will be accessible by smart devices that are internet-
ready and can search, consume, process, and share information just like
phones and laptops do today.


The interaction and exchange of information between humans and
computers will become seamless. Searching, consuming, and sharing
information and content will be based on semantics (meanings) rather
than exact keywords. To enable semantics, Web 3.0 will be heavily
dependent on artificial intelligence (AI) to understand and predict the
intentions of humans.

Another feature will be video giving way to virtual reality (VR) and
augmented reality (AR) applications. Everything from surgery to
education will be affected by the rich 3D graphics of these technologies.

This leaves us with the following question: How will APIs look in Web
3.0? The answer is: Most likely, APIs will be event-driven.

Event-driven APIs
The traditional approach to consuming an API has been request-response.
Applications send an API query, and the API sends back a result.

Figure: Traditional API Access with Request and Response

In Web 3.0, human-computer interaction will be more autonomous, with
event-driven APIs sending data when an event occurs. With event-driven
APIs, a consumer subscribes to an API endpoint, indicating that it
wishes to receive updates asynchronously when particular events happen.
When a matching event happens, the API sends the event’s data to all
subscribed consumers.


Event-driven APIs are often called streaming APIs, push APIs, or
asynchronous APIs. We already see this type of API in our daily lives.
Whenever we call an Uber, our app keeps sending us messages based on
the driver’s position. When we use social media, we get notified as
soon as our connections post content.

Figure: Event-Driven API with Event Push

There are several approaches to building event-driven APIs, and they
include:

• Webhooks to publicly available callback URLs for sending event data
• The WebSocket protocol, which allows bidirectional communication
between the API provider and the consumer
• Server-sent events (SSE) with notifications sent to a URL on which
the API consumer listens
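The subscribe-then-push pattern underlying all of these approaches can be sketched with an in-memory event bus in Python. This is a toy stand-in for webhooks or WebSockets, and all names here are hypothetical:

```python
from collections import defaultdict

# Topic name -> list of subscriber callbacks (stand-ins for callback URLs).
subscribers = defaultdict(list)

def subscribe(topic, callback):
    """A consumer registers interest in a topic once, up front."""
    subscribers[topic].append(callback)

def publish(topic, event):
    """When an event occurs, the provider pushes it to every subscriber."""
    for callback in subscribers[topic]:
        callback(event)

received = []
subscribe("driver.position", received.append)

publish("driver.position", {"lat": 37.79, "lng": -122.39})
publish("driver.position", {"lat": 37.80, "lng": -122.40})

print(len(received))  # 2: the consumer was pushed both updates
```

The consumer never polls; it only declares interest once, and the provider pushes data whenever a matching event occurs.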


Blockchain
Thanks to well-known cryptocurrencies like Bitcoin or Ethereum,
blockchain technology is increasingly being used for other
distributed transaction-based workloads. Blockchain is a distributed
ledger of transactions based on trust and verification. Each
transaction in the blockchain is immutable and is open to all nodes
in the network.

APIs are making a huge impact in smart contracts, which are
applications stored in a blockchain that run to enforce the agreement
of a transaction in real time. The agreement can be anything from
exchanging information to e-commerce purchases.

Once the contract is enforced, the transaction is made permanent
in the blockchain. If the smart contract needs to access data and
functionality outside the blockchain, it uses a blockchain oracle to
query, verify, and authenticate external data sources. Unless it is
also distributed, a blockchain oracle can be a single point of failure.
This is where smart APIs could be a possibility: APIs running within
the blockchain.

Artificial Intelligence
Artificial Intelligence (AI) allows human-programmed applications
to observe and learn from event data patterns, making autonomous
decisions for the program’s execution path based on that observation.
Cloud providers now offer several advanced AI-enabled APIs for
developing cognitive applications. Examples of these services
include natural language processing (NLP), face recognition, and
video analysis. Using these services’ APIs significantly reduces
development time as the cloud product does most of the heavy lifting.

AI will also play a role in API development and adoption. For example,
AI might enable users with limited technical skills to discover
organizational data sources and build integrations between systems.
AI can be used for automated documentation of APIs and for monitoring
API security threats or optimization opportunities.


IoT
The Internet of Things (IoT) comprises physical devices connected over
a network that can create, collect, and share data with other devices
over the internet.

APIs for IoT are the glue that sits between heterogeneous devices
and the applications that use them. Typical examples of IoT APIs can
be seen in app-controlled devices like personal fitness trackers, lights,
alarms, and more. These devices capture information and send that
over the internet to the manufacturer’s backend system.

The availability of IoT APIs depends on the maturity level of the
product. Some IoT devices already have well-established APIs. The
popular Google Assistant can be used for voice recognition and
control of other smart devices, enabling developers to create enriched
applications for music, travel, personal assistance, and more.

As you can see, these are exciting times. There is an explosion of
data and functionality like never before in the history of computing.
This brings us to our final question: How can the development
community prepare for writing the next generation of APIs?


Preparing for the Future


How should the development community prepare for the next big change
in the world of APIs? Holistically, APIs of tomorrow need to be:

• Easy to develop, test, and deploy
• Adaptive to newer technologies
• Able to take advantage of automation
• Deployable across a wide range of infrastructure
• Fault-tolerant and always-on
• Performant under extreme load
• Secure and compliant with industry standards

API Development Practices


With tools now available to help developers design, build, test, and publish
APIs, organizations are vastly improving their development practices. Two
particular areas seeing impact are API Linting and APIOps.

API Linting

Unlike traditional code linters that highlight errors or format code,
API linters ensure specifications conform to specific standards
and constraints. One such example is Insomnia, which validates
a specification as you write. Using API linters also ensures
organizational APIs are easy to manage, port, and upgrade. Public
APIs can be made more interoperable when they are linted against
the same guidelines.
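At its core, a lint rule is just a check over the specification document. A toy rule in Python, verifying that every operation in a hypothetical OpenAPI-style description declares a description, might look like:

```python
def lint_operation_descriptions(spec: dict) -> list:
    """Return a warning for every operation missing a description."""
    warnings = []
    for path, operations in spec.get("paths", {}).items():
        for method, operation in operations.items():
            if "description" not in operation:
                warnings.append(f"{method.upper()} {path}: missing description")
    return warnings

spec = {"paths": {
    "/contacts": {
        "get": {"description": "List contacts"},
        "post": {},  # violates the rule
    }
}}

print(lint_operation_descriptions(spec))  # ['POST /contacts: missing description']
```

Real API linters ship with catalogs of such rules and let organizations add their own, so every team’s specification is held to the same guidelines.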

APIOps

APIOps teams use DevOps and GitOps principles in the context of the
API development lifecycle, building automation into the API
development workflow. Traditionally, the DevOps team managed
infrastructure and code deployment automation, while the developers
focused on building features. An APIOps developer has both
responsibilities: creating the interface and automating the deployment.


Enhancing API Performance


In a future with more internet-enabled devices and human-application
interaction, Web 3.0 will create major performance demand on existing
and new public APIs.

Enterprises, therefore, need to ensure their APIs can stand up to
the increased demand. Some of the methods to make APIs more
responsive include:

• Caching responses to frequently run queries
• Using connection pooling for load balancing
• Using newer methods of querying (like GraphQL) to limit the size
of the API response and to return only requested data
• Compressing response data
• Writing logs asynchronously
• Processing incoming requests asynchronously or in batches
• Using auto-scalable, self-healing infrastructure
• Employing high-speed message serializers

Securing APIs
Like any computing resource, APIs are also the target of malicious
attacks, and their potential vulnerabilities include:

• Using shadow code (third-party code that has not been checked
for security threats)
• Using out-of-date code libraries
• Using poor firewall or API gateway configurations

The Open Web Application Security Project (OWASP) has an API
Security Project that lists some of the top security issues for APIs.
Some of the ways APIs can be made more secure include:

• Using a zero-trust policy and a layered security approach
• Deploying APIs behind an API gateway
• Validating input parameter values for injection attacks
• Throttling and limiting an API client’s request frequency
• Encrypting data at rest and in transit
• Regularly running API testing tools to identify vulnerabilities
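Throttling, for instance, can be as simple as a fixed-window counter per client. This is a toy sketch with hypothetical names; production API gateways such as Kong implement more sophisticated algorithms like sliding windows:

```python
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per client per time window."""

    def __init__(self, limit: int, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)   # (client_id, window_index) -> count

    def allow(self, client_id: str, now: float) -> bool:
        window_index = int(now // self.window)
        key = (client_id, window_index)
        if self.counts[key] >= self.limit:
            return False                 # over quota: reject (e.g., HTTP 429)
        self.counts[key] += 1
        return True

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("client-a", now=10.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The fourth and fifth requests in the same window are rejected, which blunts both abusive clients and accidental request floods.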


The list is not exhaustive, but the main takeaway here is this: securing
APIs isn’t just one team’s responsibility. Everyone involved in the
design, development, testing, and deployment should be part of the
security initiative.

Environment Independence
Despite the wide adoption of cloud technologies and containerized
applications, traditional infrastructure isn’t going away any time soon.
Organizations will be running their workloads in physical servers,
multiple clouds, virtual machines, containers, and even serverless
platforms—side-by-side and for some time to come.

This means that APIs have to be adaptable for all of these
environments. Ideally, an API developed on a developer workstation VM
should run seamlessly when it’s ported to a Kubernetes cluster.

The network performance and security of hosting environments
may vary widely, resulting in poor, unpredictable API responses. As
we saw, APIs can be made loosely coupled and asynchronous, but
they can still fail without a robust and intelligent communication and
routing mechanism. Service meshes like Kong Mesh can make this
communication easier.


Final Words
We have been on a journey. In this second part of our two-part
eBook tracing the evolution of APIs, we’ve looked in particular
at APIs during this modern cloud age. We considered RESTful
APIs and the recent explosion of microservices development. As
today’s software applications have evolved to comprise a mesh of
globally distributed APIs, the need for robust, secure, and seamless
connectivity is apparent.

In our journey, we continued on to consider the future of APIs with
event-driven APIs and the impact of blockchain, AI, and IoT. Lastly,
we looked at how the developer community today can prepare for
building and deploying the APIs of tomorrow.

We hope that this eBook has given you perspective about the history
of APIs and how they have changed in shape over the years. If
you are developing APIs, you can consider the extensive suite of
application solutions from Kong, a Gartner Magic Quadrant Leader
for Full Lifecycle API Management. To learn more, talk to one of our
API experts.

Konghq.com

Kong Inc.
contact@konghq.com

150 Spear Street, Suite 1600
San Francisco, CA 94105
USA
