
Polylith
Polylith is a software architecture that applies functional thinking at the system scale.
It helps us build simple, maintainable, testable, and scalable backend systems.

Introduction
Polylith is a software architecture that decouples the backend code into reusable "LEGO bricks" that can be
shared (mainly) across services, while we can still work with all our code as if it were a single
codebase.

It allows us to postpone decisions on how to execute the code in production, and easily change how to run
it, e.g. by increasing the number of services, without affecting existing bricks.

Each deployable artifact is just a config file that specifies the set of bricks we want to include.

Sponsoring

Please support the work with Polylith and the poly tool here!

Bronze Sponsors

Learn Polylith

There are several ways of learning Polylith:

Try it out yourself

To learn how to build your first Polylith system, head over to the poly tool documentation, where you can
install it, create a workspace, and follow the examples using the powerful language Clojure. If you prefer
Python, you can find an early version of that tool here.

How it's used in production

Watch Sean Corfield explain how he uses Polylith and the poly tool at World Singles Network.

Look at working code

Go and have a look at these systems:

The RealWorld example app written by Furkan Bayraktar

The User Manager example app written by Sean Corfield

The Polylith project written by Joakim Tengstrand

A Game of life app written by Joakim Tengstrand

Watch a video

Get a high level introduction to Polylith with these two videos:

Polylith in a Nutshell

10-minute overview, by James Trunk


Polylith - the last architecture you will ever need, by Joakim Tengstrand and Furkan Bayraktar

A 39-minute overview of the Polylith architecture, by Joakim Tengstrand and Furkan Bayraktar

Listen to a podcast

Jacek Schae interviews the Polylith team in the ClojureScript podcast:

S4 E21 - Polylith with Joakim, James and Furkan (Part 1)


S4 E22 - Polylith with Joakim, James and Furkan (Part 2)

Slack

Come and chat with us and other Polylith users in Slack.

Read a blog post

The Monorepo/Polylith Series - Sean Corfield writes about his Polylith journey
A fresh take on monorepos in Python - David Vujic introduces Polylith in Python
The Polylith architecture - How Polylith came to life, by Joakim Tengstrand
The origin of complexity - The foundational concepts that Polylith is built upon, by Joakim Tengstrand

Production systems

Enter the Matrix and have a look at different production systems.


Read the documentation

If you prefer reading documentation, then you’re already in exactly the right place!

Note that Polylith documentation is split into two parts:

1. This high-level documentation, which describes how Polylith works and the problems it solves. It tries to
remain language agnostic, but does use Clojure in the code examples.

2. The poly tool documentation, which describes how to work with a Polylith codebase in Clojure.

Content:

Polylith - what is Polylith?


Polylith in a nutshell - walkthrough of the building blocks of Polylith.
Workspace - where we put everything.

Component - our composable building block (brick).


Base - building block (brick) that exposes a public API
Project - deployable artifact made of a set of bricks
Development project - the place where we work with all our bricks
Bring it all together - a short example

Simplicity - how Polylith simplifies the design

poly tool - overview of the poly tool

Current architectures - a walkthrough of common architectures

Advantages of Polylith - how Polylith differs from other architectures

Transitioning to Polylith - step by step guide on how to transition to Polylith


Production systems - list of companies using Polylith in production

Why the name "Polylith"?


Videos

FAQ - Frequently Asked Questions


Who made this?

What is Polylith?
Polylith is a software architecture that solves some of the fundamental challenges in building backend
systems. Those challenges are:

It's difficult to share our code across teams and services


We lack a shared language for communicating architectural concepts
As our codebases grow, they tend to become a complex mess that is hard to change and test

We try to mimic our complex production environments in our development environment


Our systems take too long to test, build, and deploy

Polylith addresses these challenges by introducing simple, composable, LEGO-like bricks, which can easily
be shared across teams and services. The choice of bricks determines what each artifact does and how it's
exposed.

To make the development experience even more delightful, we've also built a tool that gives us instant
creation of the various building blocks, incremental testing (only the code impacted by the latest changes is
tested), and project visualization.

What isn't Polylith?

Polylith isn't a framework and does not come with ready-to-use functionality.
Polylith isn't a library.
Polylith isn't a tool (but has tooling support for Clojure).

What programming languages are Polylith for?

Polylith is language agnostic, and it should be possible to use it in almost any programming language. We
in the Polylith team have only used it with the functional language Clojure so far, but there is nothing
stopping someone from using it in a procedural language like C, or an object oriented language like Java.
Even without tooling support, you will get most of the benefits.
Introduction
Polylith in a Nutshell
Polylith is a software architecture that simplifies our backend services and tools by
enabling us to construct them as “modular monoliths” using composable components.

Here we will introduce the basic building blocks of Polylith.

Function
Functions are the smallest building blocks in Polylith, from which everything is created. Most communication
within a Polylith system happens through simple function calls, which connect the different high-level
building blocks that the system consists of.

The simplicity of functions makes them fantastic building blocks for code:

1. Encapsulation: functions hide their implementation and only expose their signature.
2. Simplicity: functions have a single responsibility and don't mix nouns with verbs, which makes them
fundamentally untangled.

3. Stateless: functions are just code; they don't hold state or instances.
4. Purity: functions can be pure (i.e. have no side effects), which makes them easy to understand, reuse,
test and parallelise.

These properties make functions (especially pure functions) inherently composable and testable units of
code and a perfect foundation for a software architecture like Polylith.
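
As a tiny, hypothetical Clojure sketch of these properties (the names are made up for illustration):

;; Two pure functions: the result depends only on the arguments,
;; and they compose without any shared state.
(defn net-price [price vat-rate]
  (* price (+ 1 vat-rate)))

(defn order-total [prices vat-rate]
  (reduce + (map #(net-price % vat-rate) prices)))

(order-total [100 200] 0.25) ;; => 375.0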

Library

A library is the kind of library we already know, a chunk of code that is compressed into a versioned file
which can be downloaded from Maven, Clojars or other repositories.

A library is a piece of code that lives in its own namespace which allows us to pick the libraries we want to
use, without getting into name clashes (but sometimes dependency hell!).

We rely on the tooling we already use, which hides the complexity of resolving dependencies on other
libraries and caching files on disk.

Component

Components are high-level building blocks (bricks) which remove the need for layers (horizontal, vertical
slice, or onion) in our architecture.

A component can represent a part of our domain (e.g. cart, invoice, order, user, etc.), or be part of our
infrastructure (e.g. authentication, database, log, etc.), or be an integration point to a third-party system (e.g.
crm-api, payment-api, sms-api, etc.).

Base

A base is a special type of building block (brick) that only exposes its functionality via a public API, e.g.
REST, Lambda, GraphQL, gRPC, command-line, etc.

A base exposes a collection of endpoints via its API, and delegates the implementation of each endpoint to
an appropriate component.

Brick

Brick is the common name for a component or base, which are our building blocks (together with libraries).

Project

A project specifies which libraries and bricks should be included in an artifact, like a service, lambda
function, command line tool, or a new library. This allows for optimal code reuse of components across
multiple projects.

Development project

A development project is the place we use to work with all our libraries, components, and bases. It gives us
a “monolithic development experience” with full code navigation, debugging, and refactoring across our
entire codebase, and the possibility to work with our entire system in a single REPL.

Workspace
A workspace is the place in a Polylith codebase where we store all our building blocks and configure our
projects.

Which challenges does Polylith solve?

Polylith’s single development project gives us a delightful development experience; we get all the
benefits of coding with a monolith (code navigation, debugging, refactoring, and a single REPL) but
maintain the flexibility of deploying our components into any combination of artifacts.
The LEGO-like components simplify the design of our tools and services by giving us building blocks at
the right level of abstraction, which are understandable, composable, reusable, and replaceable.

Components are inherently simple and easy to reason about; they are just code, have a clear interface,
and hide their implementation.
Components maximise code reuse to the point of having zero code duplication across our entire
codebase.
Projects maximise the deployment flexibility by allowing us to combine any set of bricks.

Now, let's dig deeper into the Polylith architecture to better understand how it solves these challenges.
Architecture
Workspace
A workspace is a single place for all of your organization's code and projects.

The workspace is the root directory in a Polylith codebase, and it's where we work with all our building
blocks and projects. A workspace is usually version controlled in a monorepo, and its subdirectories look
like this:

▾ workspace
  ▸ bases
  ▸ components
  ▸ development
  ▸ projects

We can summarise the main ideas like this: components encapsulate functionality, while the bases decide
how the code should be exposed and executed, e.g. as a command-line tool, a lambda function, or a REST
API. This separation allows us to easily change how our code is executed by including an arbitrary set of
bricks in each project, which can later be used to build the artifacts we need. Finally, the development project
is used to improve the development experience.

If you want to see an example of a complete Polylith codebase, go to the RealWorld example app or the
usermanager-example app.
Component
Components are the main building blocks in Polylith.

A component is an encapsulated block of code that can be assembled together with a base (it's often just a
single base) and a set of components and libraries into services, libraries or tools. Components achieve
encapsulation and composability by separating their private implementation from their public interface:

The dark green represents the implementation and the light green represents the interface.

Each component lives in a separate directory under the components directory, and contains a src,
test and resources directory:

▾ workspace
  ▾ components
    ▾ mycomponent
      ▸ src
      ▸ test
      ▸ resources

The src directory often contains at least two namespaces, one for the interface and one for the
implementation:

▾ src
  interface.clj
  core.clj

A component's interface is a namespace that exposes a collection of functions for other components or
bases to call. Each function in a component’s interface “passes-through” to an equivalent function in its
private implementation (the core namespace in this example). This “pass-through” approach enables full
code navigation and refactoring, whilst maintaining encapsulation. You are allowed to put the
implementation directly in the interface, but most of the time you want to separate the two.
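
As a hedged sketch of what such a pair of namespaces could look like (the top namespace com.mycompany and the user component are hypothetical, not taken from the example apps):

;; interface.clj - the public interface of the component.
(ns com.mycompany.user.interface
  (:require [com.mycompany.user.core :as core]))

(defn find-user
  "Pass-through to the private implementation."
  [id]
  (core/find-user id))

;; core.clj - the private implementation, only reached via the
;; interface namespace from other bricks.
(ns com.mycompany.user.core)

(defn find-user [id]
  {:id id :name "example"})

Calling code only ever requires com.mycompany.user.interface, so the core namespace can change freely without affecting other bricks.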

Code examples of components can be found in the RealWorld example app and in the Polylith Tool.
Base
Bases are the building blocks that expose a public API to the outside world.

A base is an encapsulated block of code that can be assembled together with a set of components and
libraries into services, libraries or tools. Bases achieve encapsulation and composability by separating their
private implementation from their public API:

The dark blue represents the implementation and the light blue represents the API.

A base has a "thin" implementation which delegates to components where the business logic is
implemented.

A base has one role and that is to be a bridge between the outside world and the logic that performs the
"real work", our components. Bases don't perform any business logic themselves, they only delegate to
components.

As a result, we can easily add more functionality to a base by either re-using existing components or by
adding new ones. The components are accessed through their interfaces, which allow us to use different
components (for the same interface) in different projects, e.g. development, test, stage and production, which
makes Polylith an incredibly flexible way of organising code.

Each base lives in a separate directory under the bases directory, where it has a src, test and
resources directory:

▾ workspace
  ▾ bases
    ▾ mybase
      ▸ src
      ▸ test
      ▸ resources

The src directory usually contains two namespaces, one for the API and one for the "thin" implementation:

▾ src
  api.clj
  core.clj

Code examples of bases can be found in the RealWorld example app and in the Polylith Tool.
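
As a hedged sketch (all names here are hypothetical, not taken from those examples), a thin base api namespace just translates requests into calls to component interfaces:

;; A hypothetical REST base: the api namespace contains no business
;; logic of its own, it only delegates to component interfaces.
(ns com.mycompany.rest-api.api
  (:require [com.mycompany.user.interface :as user]))

(defn get-user-handler
  "Thin endpoint: validation and business logic live in components."
  [request]
  {:status 200
   :body   (user/find-user (get-in request [:params :id]))})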
Project
Projects configure Polylith's deployable artifacts.

A project is the result of combining one base (or in rare cases several bases) with multiple components and
libraries.

This project has one library (grey), two components (green) and one base (blue).

The artifacts that are built based on the projects are the end goal of the Polylith architecture, which we
deploy in our test, staging and production environments.

Each project lives in a separate directory under the projects directory, where it has a configuration file:

▾ workspace
  ▾ projects
    ▾ myproject
      deps.edn

The configuration file lists all included bricks and project level libraries (brick level libraries are implicitly
included):

{:deps {poly/change    {:local/root "../../components/change"}
        poly/creator   {:local/root "../../components/creator"}
        poly/file      {:local/root "../../components/file"}
        poly/git       {:local/root "../../components/git"}
        poly/help      {:local/root "../../components/help"}
        poly/lib       {:local/root "../../components/lib"}
        poly/migrator  {:local/root "../../components/migrator"}
        poly/shell     {:local/root "../../components/shell"}
        poly/util      {:local/root "../../components/util"}
        poly/validator {:local/root "../../components/validator"}
        poly/version   {:local/root "../../components/version"}
        poly/ws-file   {:local/root "../../components/ws-file"}
        poly/poly-cli  {:local/root "../../bases/poly-cli"}

        org.clojure/clojure {:mvn/version "1.10.3"}
        org.slf4j/slf4j-nop {:mvn/version "1.7.32"}}}
Example projects can be found in the RealWorld example app, the User Manager example app, and in the
Polylith Tool.

Some languages, like Clojure, support having more than one src directory out of the box, or support
including "projects" as in the example above, while other languages may need a plugin, like the build-
helper-maven-plugin.

It's almost like magic, because all we have to do is list all the building blocks used in e.g. a service, and
everything will automatically "connect" without the need for dependency injection, annotations or any other
"magic"! Here is how that looks in the User Manager example app.

This is also why the Polylith architecture can be used without tooling support and still give us most of its
benefits.
Development project
The development project is what we open in our editor/IDE and where we work with
our entire codebase.

The development project is where we specify all the components, bases and libraries that we want to work
with:

This development project has two libraries (grey), two components (green) and two bases (blue).

The development project gives us a delightful development experience that allows us to work with all our
code from one place in a single REPL (if the language of choice has support for it).
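
With Clojure tools.deps, this is typically expressed as a :dev alias in the workspace root deps.edn that pulls in every brick via :local/root. A hedged sketch with hypothetical brick names:

;; Workspace root deps.edn (sketch): the :dev alias includes all the
;; bricks and libraries we want to work with from a single REPL.
{:aliases
 {:dev {:extra-paths ["development/src"]
        :extra-deps  {poly/user           {:local/root "components/user"}
                      poly/cart           {:local/root "components/cart"}
                      poly/rest-api       {:local/root "bases/rest-api"}
                      org.clojure/clojure {:mvn/version "1.11.1"}}}}}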

The idea is to give us the fastest feedback loop possible and to keep things simple. We also separate
development from production which allows us to optimize development for productivity while production can
be optimized for non-functional requirements, like performance or scalability.

This gives us enormous flexibility and allows us to make decisions in production that don't affect
development, and to postpone decisions on how to execute the code in production.

One of the main mistakes in the software industry today is conflating development with production: because
we use the production projects to work with our code, adding a service in production automatically "turns up"
in development as one more project.

This makes things more complex than they have to be and accelerates the explosion of complexity. In
Polylith we avoid this problem by isolating development and production from each other.
Bring it all together
Let’s take a look at an example Polylith workspace, so you can start to understand the benefits of dividing
your codebase this way:
We can see that the codebase contains almost 9000 lines of source code and around 7700 lines of test
code (7093 + 619). The first table shows our projects, including the single development environment, and
the second table lists all the components (in green) and bases (in blue) and in which projects they are used
(the s and t flags show that the brick's source and test directories are included in the project).

This codebase is divided into 29 bricks (27 components and 2 bases). That's 310 lines of code per brick on
average. Most bricks are shared across several projects, which are used to build various deployable
artifacts. The dev column represents the single development project. The poly project is used to build the
poly command-line tool from which the diagram above is created. The api project is used to build the
code as a library, and finally, the deployer project is used by the tool itself in the CI build.

9000 lines of code isn't much, and it's not uncommon for non-Polylith services, tools, or libraries to be a lot
bigger. However, the problem with storing something that big as one piece in a single place, instead of
dividing it into 29 (or more) smaller pieces, is not only that the code can't easily be reused, but also that it
makes it harder to reason about the code.

In a Polylith codebase, we can make each brick tiny, making it easy to work with the code and allowing us to
focus on one part of the code at a time. The structure also encourages us to divide the bricks into even
smaller pieces, using namespaces. To give an example, this is how the creator component looks:
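
The screenshot isn't reproduced here, but the directory layout it shows is roughly the following (file names approximated from the description below):

▾ components
  ▾ creator
    ▾ src
      interface.clj
      base.clj
      component.clj
      project.clj
      workspace.clj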
It's pretty easy to reason about this code. It lives in the components directory, under the creator directory,
indicating that this component can create things. Based on the namespace names, it looks like it can create
components, bases, projects, and workspaces.

When we look at the function names in the component's interface, it turns out that we guessed right! By
giving the namespaces and functions good and explanatory names, and by keeping them small, we lay the
foundation for a healthy codebase that is easy to reason about, compose, and change!
Simplicity
How Polylith helps us fight complexity.

The Polylith team loves simplicity, which is one of the main reasons we chose Clojure when implementing
the Polylith Tool and the RealWorld example app. For newcomers, Clojure’s syntax probably looks like a
bunch of weird parentheses, positioned in strange places, for no apparent reason. However, Clojure’s
parentheses solve a real challenge with communicating the structure of code. In the same way, Polylith’s
approach to structuring code may appear alien in the beginning. But every decision has been carefully
weighed by the Polylith team to optimise for simplicity, development speed, and developer delight!

It is no coincidence that a component lives in its own src directory, that it’s just “plain code”, that it has an
interface, that it’s only allowed to depend on interfaces (and libraries) and that it lives in a monorepo. All
those things are there to decouple our system(s) into small Lego-like bricks in a way that they can be shared
and put together in a useful way.

Components are very similar to functions in that way but operate on a higher level. A function is
composable, easy to reason about, has a well-defined interface and is fundamentally simple. A component
is also composable, easy to reason about, has a well-defined interface and is fundamentally simple.

Teasing things apart so that they can be composed together is often a sign of good design. It has been as
hard to convince people that Polylith is a good idea as it has been to convince them that Clojure is. Our
experience, though, is that when people start using either of them, they soon discover how simple they are,
how fun they are to work with, and how productive they become. That's why we love Clojure and Polylith.

To get an idea of the principles Polylith is based on, please read this article where I try to explain where
complexity comes from.

This is how Polylith fights complexity:

1. We use a monorepo that helps us keep the codebase in sync so that we can make coordinated commits
across projects.

2. We have only one of each component and interface, which removes code duplication and maximises
code reuse.
3. A component is "just code" that exposes a set of functions and we avoid the complexity that comes with
mutable state.
4. A component lives in its own place, a separate src directory. The default way of structuring code is
normally to put different functionalities together in one or several places/projects. The result is that each
piece of functionality can't be shared across projects. Polylith solves that problem by letting each
functionality live in its own separate place, and be used everywhere.
5. Components have interfaces and only know of other interfaces. This decouples them from each other
and makes them composable and replaceable.

6. The single development project helps us keep the contracts between the components in sync and stops
us from introducing bidirectional or circular dependencies. In other words, it guarantees that all
dependencies point in the same direction, even across projects.

7. A project specifies what building blocks it contains. There are two things that make this possible. First,
all dependencies point in the same direction. Second, not only frozen code (libraries) can be reused
across projects but also living code in the form of components and bases. This maximises
composability and allows us to pick and choose what to include in each project as if they were LEGO®
bricks.
8. Polylith separates what from how at all levels, from functions and bricks to the workspace. What is
represented by function names and their signatures, component interfaces and base APIs, and all the
bricks in the workspace. How is an implementation detail, including how each function and interface is
implemented and how to execute the code in production.

9. The introduction of new high-level concepts and the standardized structure makes it easier to reason
about the code and reduces the cognitive load, which results in reduced complexity.

As you can see, Polylith is not only about productivity but very much about simplicity.
Tool
Overview
The Polylith Tool optimizes the creation, development and testing of Polylith
workspaces.

What is the Polylith Tool?


The Polylith Tool gets us to development nirvana.

It helps us:

create the structure of our workspace, bases, components, projects and development project.
test our codebase incrementally.

visualize the workspace so we can understand and communicate about our codebase and architecture.

We didn't have the Polylith Tool when we started building our first Polylith workspace. We
manually created the directory structure and built the projects every time we made a change.
Polylith's other advantages still made it a delight to work with, compared to our previous situation
(Microservices).
So don't be afraid to start trying Polylith, even if your language doesn't have its own Polylith Tool
yet.

Head over to the poly tool to learn more about how it works and how to use it.

Conclusion
Now it's time to start wrapping up our journey, so let's discuss what we think makes Polylith so good to work
with.
Current architectures
Let's have a quick look at some of the mainstream software architectures before we compare them to
Polylith later on.

We'll describe our development and production experiences with three mainstream software architectures.
But we'll start by defining our terms:

Software architecture:

The high-level structures of a software system


Fundamental structural choices, which are costly to change once implemented

Monolith:
A software architecture where the code is stored in a single codebase and deployed as a
single artefact

Microservices:
A software architecture consisting of small and independently deployable services

Each service runs in a separate process and communicates with the others across a
network

Serverless:

A software architecture based on a cloud-computing execution model


The cloud provider dynamically manages the allocation of machine resources

Development experience

The traffic lights are a rough summary of our personal experiences. Please take them as our
subjective opinions, not as objective truths.

We split the ratings into "small" and "large", because building larger and more complex systems
usually gave us a different experience.
Our subjective development experiences

Monoliths keep all their code in one place, which gives a friction-free development experience for code
navigation, refactoring, debugging, code reuse, and testability. However, as Monoliths grow, they trend
towards "big balls of mud", which become very difficult to maintain.

Working with a single Microservice is great, as it gives us the same benefits as working with a small
Monolith. However, the more Microservices we maintain, the worse our development experience becomes.
That's because each new service boundary increases the friction for code navigation, refactoring,
debugging, code reuse, and testing.

Serverless architecture is inherently modular and functional, which gives significant advantages for
simplicity and testability. However, the distributed nature of its code execution has a negative impact on our
code reusability, debugging and testing.

Production experience

Our subjective production experiences

The Monolith's single-artefact approach keeps operational costs low and simplifies deployment, but makes
horizontal scalability difficult.

Microservices' distributed approach gives excellent scalability and robustness, but makes deployment
complex and hosting expensive.

Serverless' "outsourcing" approach gives excellent scalability, reduces our ownership of deployment
complexity, and keeps server costs in line with our usage, but also gives us an air-tight vendor lock-in.
So what's missing?

These architectures give us plenty of guidance on how to deploy our systems, but very little guidance on
how to structure our code within each system. Over the years, we've tried many different approaches to
improve our systems' internal structures (e.g. DCI, DDD, Design Patterns, SOA, SOLID, Hexagon, etc.), but
none of them took us all the way to development nirvana.

To get there, we realised we needed to roll up our sleeves and invent a whole new approach.
Advantages of Polylith
What makes Polylith so simple, fast and fun to work with?

Let's compare Polylith with the three architectures we looked at earlier.

A Polylith project can be deployed as any artifact, like a service, tool or library. The most common
usage is to deploy them as different kinds of services, which is what we will show in this example.

Development
Polylith's single development project allows us to work with all of our building blocks in one place. This
disconnects our development experience from our chosen deployment architecture.

Untangling development from deployment allows us to delay our deployment decisions until the last
possible moment. This delay allows us to avoid "premature distribution" and keeps our systems as simple
as possible, for as long as possible.
Cost: The simplicity, clarity, and maintainability of Polylith codebases, combined with its frictionless
development experience, greatly increases team effectiveness, which substantially reduces development
cost.

Debugging: All the code in a Polylith development project can run within a single REPL, giving us a
first-class REPL-driven development and debugging experience.

Fast feedback: The Polylith Tool keeps track of which bases and components have changed since the last
stable point in time and only tests those. This gives us lightning-fast feedback both in our local development
environment and when we build and deploy in our Continuous Integration environment.

Refactoring: Components and bases are connected to components with simple function calls. This means
that we can safely refactor the code with our editor/IDE.

Reusability: Components are inherently reusable because they are encapsulated, stateless and
composable. They can be reused within a single service and across multiple services (and other artifacts).

Simplicity: Polylith building blocks are just code; a collection of functions behind an interface. Interfaces
guarantee encapsulation, which ensures our codebase remains untangled, leading to simpler services.

Testability: The encapsulated and functional nature of components makes them easy to test in isolation, and
as complete services.

Production
Not only does Polylith allow us to delay our deployment decisions, it also allows us to easily change our
deployment architecture, when the need arises. That's because Polylith makes it easy to recombine our
components into any number of services and deploy them to meet our performance needs.
Cost: Because Polylith allows us to keep our deployment architecture as simple as possible, for as long as
possible, it reduces costs by reducing the number of services we need to run in our test, staging, and
production environments.

Deployment: The Polylith Tool makes the build and deployment process simple and seamless, both locally
and in our Continuous Integration environment.

Scalability: When a Polylith service isn't achieving the performance/scalability we need, then Polylith makes
it easy to create new services and scale our system horizontally. We can reuse existing components within
each new service.

Seeing all these green traffic lights for Polylith probably looks too good to be true. The best way to find out is
to try Polylith for yourself!
Transitioning to Polylith
How much will Polylith affect our current codebases and deployment experience?

Migrating to Polylith from a Monolith, Microservices, or Serverless architecture is relatively easy.


That's because we can individually migrate each artefact to a Polylith service without changing
anything else about our deployment.

Let's say that we have twelve Microservices in our current solution. After we complete the initial stage of a
migration to Polylith, then we'll still have twelve Microservices, but each will be a Polylith service.

Let's take a look at the steps involved in transitioning from each type of architecture.

From a Monolith

Don't forget to check that the project compiles, builds and all tests pass after each step.

1. Create a new workspace, and add a new project, with an empty base.
2. Copy all your Monolith's code, including its API into the base, and add all the libraries to the project.

3. This is where the real fun starts, because now we can refactor the code to increase the modularity of the
project. We start by teasing out one component at a time from our base:

It's tempting to do a lot of refactoring during this first component extraction phase, but we'd advise
against that. Instead, we should just extract one component at a time, and change as little as
possible about its structure. This ensures that we don't introduce any bugs, and gives us a known
and stable state to continue from.

When we've finished extracting all the components, we'll have a project that's in much better shape:
Some components will handle a specific part of our domain, some might manage integration with external
systems, and others will be responsible for infrastructure features such as logging or persistence.

From Microservices
Microservices is an architecture consisting of many small Monoliths. This means that migrating to Polylith is
as simple as performing the Monolith migration steps on each service:

It's tempting to try merging components across service boundaries as soon as we notice
similarities. We'd advise against this because during the transition we want every Polylith
Microservice to behave exactly the same as before. This ensures that we don't introduce any
bugs and gives us a known and stable state to continue from.

Once the initial migration of all our microservices to Polyliths is complete, then we can start to refactor.

It's likely that there are a number of common components that can be shared across multiple services.
Reusing components in this way makes our codebase DRY and easier to maintain.

We might also discover that we had prematurely optimized our Microservice architecture for scalability
and/or single responsibility. In other words, we have more services than we actually need to achieve the
scalability we require.

Those additional services come with a hefty complexity cost, so we'll be able to make our life much simpler
by combining them into fewer Polylith projects, whilst still maintaining the architectural benefits of
separating our code into single-responsibility components.

From Serverless
Serverless is an architecture consisting of many Lambda functions. This means that migrating to Polylith is
as simple as performing the Monolith migration steps on each Lambda:

As with Microservices, we are not forced to migrate all our Lambdas at the same time. If we have many
Lambdas this is especially good news, because it allows us to migrate in small and controlled steps.

Defrost our libraries


If we have extracted shared functionality into internal libraries that we maintain, then Polylith gives us the
opportunity to defrost them into living code. Libraries are created by freezing code in time, which leads to
friction in the development experience. By defrosting them into components, we get living code that is easy
to change and which is always in sync with the rest of our codebase.
Production systems
Here we list some companies that use Polylith in production.

Scrintal

The workspace

Scrintal's first commit dates back to April 2019 and the workspace has used Polylith since day one. Polylith
enabled us to experiment fast locally and ship features easily. The complete buy-in to Polylith paid off when
we started pivoting our product at the end of 2020, as even though we are changing it to a completely new
product, most bricks can be shared and reused across all our products.

Funnel
The workspace

Funnel helps companies collect, prepare and analyze all their marketing data with ease. The company was
founded in 2014, has 1000+ customers, and integrates with 500+ marketing apps and platforms. The tech
stack differs from team to team. Python, Typescript, Rust, and Clojure are all languages in use. The Clojure
adopters chose Polylith in order to get a smooth development experience and to be able to separate how
services are deployed and run from how functionality is developed and re-used.

World Singles Networks

The workspace

Connecting hearts all over the world.

World Singles Networks have helped make 4.9 million human connections, on more than 100 web
properties, in every country on the planet. The entire back end of our online dating platform is built with
Clojure and we've been using it in production for over a decade.

Our migration to Polylith started in April 2021 and, just over a year later, about a third of the code has been
migrated (about 49K lines out of 138K). Polylith has enabled us to increase modularity, reduce coupling,
improve testability, and focus more on the importance of naming -- making it easier to find existing code
and to decide where new code should live.
The concept of "swappable implementations" for component interfaces has allowed us to more easily share
code between a variety of applications that need to run in different contexts, such as running without a
database, or on older JVMs. We're looking forward to having our entire codebase migrated in the next year
or two!

How to be added

Feel free to contact us if you can't find your company here!

How to be added: execute poly ws out:yourcompanyname-ws.edn :no-changes at the workspace root and
mail it to joakim.tengstrand(at)gmail.com along with a brief description of your experience with Polylith and a
link to the company.
Why the name "Polylith"?
The name combines the concept of 'many' with the concept of 'stones'.

Dictionary definition

Polylith:

A prehistoric monument consisting of many stones in one place.

Stonehenge in Wiltshire, England

Think of monuments such as Stonehenge in England, the Drombeg stone circle in Ireland, or the Carnac
stones in France.

Our definition
Polylith:

A software architecture consisting of many building blocks in one place.


Videos
Screencasts and conference talks to explain Polylith.

Polylith in a Nutshell

10-minute overview, by James Trunk

Polylith - the last architecture you will ever need, by Joakim Tengstrand and Furkan Bayraktar

A 39-minute overview of the Polylith architecture, by Joakim Tengstrand and Furkan Bayraktar
Meetup: Collaborative Learning - Polylith

A 70-minute session where Sean Corfield explains how he uses Polylith at World Singles Network

clojureD 2019: "Polylith – A software architecture based on LEGO®-like blocks" by Joakim Tengstrand

A 31-minute introduction to Polylith at clojureD 2019, by Joakim Tengstrand


FAQ
Frequently asked questions.

Question: Are there any plans to add support for cljs/cljc projects, components and bases?

Answer: There are several reasons why we don’t support Polylith for the frontend at the moment. One of
them is that frontend development has other needs than backend development. For example, we often get
instant feedback when we work with the code in a frontend project, because the tooling support for fast
feedback is already there. The languages are also more dynamic, and can work with “living
code” directly instead of using “frozen” libraries. It’s also common to put all frontend code into a single
repository, because the UI is often built as a SPA, which solves much of the sharing problem. Another
aspect of it is that frontend code is quite messy and entangled, which is a hard problem to solve. Different UI
frameworks and libraries have their own solutions to reuse parts of the code by supporting UI components.
It’s possible that Polylith could be a good fit for the frontend too, but we need to try it out more to see if it adds
enough value.

Question: Why not only use components and skip the bases?

Answer: They have different responsibilities. If you start to mix the two, you also lose something. A
component has an interface, and is composable. A base has no interface; instead, it exposes a public
API. The base is the “base” of your projects/artefacts while the components implement the functionality. This
makes it easier to reason about our software.

Question: Why use component interfaces instead of protocols as in Stuart Sierra's component?

Answer: The “problem” with a protocol is that you need a common first argument to dispatch on. With
Polylith's purely “functional interface” approach, that's not needed: they are just regular functions. Another
difference is that protocols live under the same source directory while component interfaces live in their own
source directories together with their components and "come to life" only when they are put together into
projects. This approach, combined with the monorepo idea, gives an extra level of flexibility compared to
protocols.

Question: How is Polylith different than Spring Framework or any other Java framework?

Answer: The two are so different that it's almost hard to know where to start, but here are the main
differences:

1. Spring is based on an object oriented language that encourages the use of mutable state. Polylith uses
stateless functions and encourages the use of immutable state.

2. Spring uses dependency injection in combination with annotations or a configuration file to “glue” the
“building blocks” (objects) together at runtime. Polylith doesn’t use any magic at all, and the way it
“glues” the “building blocks” (components and bases) is to specify the source directories for all
components and bases in a single file at compile time.
3. Spring is a framework with a lot of ready-to-use functionality. Polylith is much simpler and doesn’t
provide any ready-to-use functionality, but instead it helps us structure the code in a way that we can
postpone decisions on how to run our code in production, while maximizing the productivity in our
development environment by letting us work with all the code as if it was a single codebase.

Question: Polylith feels a bit like how node modules work in combination with module bundlers like
webpack.

Answer: Although it might sound similar to a library (or dependency) solution, such as node modules,
Polylith is way more than that. First of all, in contrast to libraries, you are the owner of the Polylith
components, they live in the same place as the rest of your (living) code, and they are not frozen as libraries
are. They ensure encapsulation and composability but at the same time they are simple, easy to reason
about, and flexible. Together with the other concepts introduced with Polylith, it is an opinionated way of
architecting software, rather than a dependency system or bundler.

Question: Is it possible to mix programming languages?

Answer: If we want to mix more than one programming language, so that code can be reused across
language boundaries, then each language has to live in its own workspace. This will work especially well if
we run different languages on top of the same platform, e.g. the JVM (see list of JVM languages). We should
also pick one of the workspaces to be used when building our artifacts.

Let's say we have the languages Java, Kotlin and Clojure, where the latter is the "main" language
we use to build our artifacts from. The first thing to remember is to use different names for the top
namespaces so that we don't run into name conflicts. In this example, we would end up with top
namespaces like com.mycompany.java, com.mycompany.kotlin and com.mycompany.clojure.

Because we decided to use Clojure as our "main" language, we need to compile the other two as libraries,
e.g. java.jar and kotlin.jar .

Question: What parts of Polylith are important and what are just "ceremony"?

Answer: The short answer is that all parts are needed:

interface: Enables encapsulation and functionality to be replaced in projects/artifacts.


component: The way we package reusable functionality.
base: Enables a public API to be replaced in projects/artifacts.
library: Enables global reuse of functionality.

project: Enables us to pick and choose what functionality to include in the final artifact.
development: Enables us to work with all our code from one place.
workspace: Keeps the whole codebase in sync. The standardized naming and directory structure is an
example of convention over configuration which enables incremental testing/builds and tooling to be
built around Polylith.

Question: What's your experience of working with Polylith in practice? I would like to hear/read more
opinions on Polylith, like from people that have used it in production.

Answer: I (Furkan) am one of the contributors to the Polylith project and I would like to elaborate a little on
this. So far, I’ve been involved in four medium-large scale projects that have used Polylith. I’ve recently co-
founded a new startup called Scrintal and we’ve written its backend using Polylith. I know that there is a little
bit of a learning curve to get used to the Polylith way of thinking. You can think of it as coming to functional
programming from OO. However, once you pass that learning phase, you just start focusing on your
development, your components specifically, rather than thinking about deployment strategies or
architectures. I believe staying productive but at the same time following the current "best practices" of the
software industry is really hard today. It’s mostly because we need to think about how to deploy what we
create, before we create it, rather than focusing on being productive. At Scrintal, we are using Clojure and
Datomic, which may be considered against the so-called best practice, but it boosts our productivity quite a
lot. Having a single REPL where we can try out ideas is great, especially for a startup. You have to move
fast and pivot easily. However, you shouldn’t be in a position where you write crap code and once the
business takes off, you need to re-think the whole architecture and re-write most of the code. Polylith comes
to the rescue. From day one, by creating small building blocks, you can start testing out ideas in your REPL.
You can grow your building blocks and add new ones, and grow your codebase that way, but at the same
time, Polylith will make sure to keep it simple and untangled. Having small and isolated building blocks
ensures that you don’t create a mess in the end. Later on, you can combine all those building blocks in any
combination and choose any deployment strategy your product needs. For example, we just had a couple of
components in the beginning to test the idea. Later on, we added a simple REST API to deploy it as a single
service. After a while, we hit some performance issues and took a couple of components out of the main
service to create another service. Still, all the code lived in one single repository and was shared across all
services. Polylith allows us to postpone decisions on how to execute our code in production. Instead of
making those decisions early when we know the least, we can make them when we hit a problem in
production, a non-functional requirement that needs to be fulfilled. Finally, to give a little more context, our
backend at Scrintal has around 60 components that are deployed as 4 different services. The very first
commit was in December 2019 and we released the first version in July.

Question: Isn't this the Emperor's new clothes in fresh summer styles? I can't see anything new here except
that you propose using "libraries"!

Answer: Using libraries was actually how it all began. That was a real pain because it slowed us down
significantly. Instead of being able to change the code and get instant feedback from the REPL, we now had
to switch between projects, build a snapshot library and restart the REPL. When you do that a hundred times
per day, it really starts to slow you down. With components, you can work with all your code from one place,
using a single REPL. If you zoom in to the different solutions, you will not find anything new here, but when
you start working with Polylith, you realize that the separation between development and production actually
is a new idea, that components and bases are valuable new concepts and that being able to combine
blocks of code in a Lego-like way is very powerful, simple, and makes you more productive in the end. I try
to explain all of this in this video.
Question: What's the point of an interface if you can't (or can you? how?) swap out the implementation?
(see: OCaml's modules)

Answer: Each component contains its own interface file/namespace. If two components are using the same
interface, the contract of the interface is the combined set of def/defn/defmacro definitions for both
components. If any of them don't implement the full set, then the tool will complain when running the
check, info or test command.

We have an example in the Profile section of the tool documentation, where both the user and user-
remote components implement the user interface. The components live in two separate directories,
under the components directory, and both use the se.example.user namespace but with different
implementations in their core namespace:

▾ workspace
  ▾ components
    ▾ user
      interface.clj
      core.clj
    ▾ user-remote
      interface.clj
      core.clj

The example starts with a command-line project that contains the user component, but then we “swap”
(at compile time) to the user-remote component, by specifying the source directory of user-remote
instead of user in projects/command-line/deps.edn . This is described in detail in the Profile
section of the tool documentation.
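
As a hedged sketch of that swap (other entries omitted for brevity), the project's deps.edn goes from pointing at user to pointing at user-remote:

;; projects/command-line/deps.edn - before the swap (sketch):
{:deps {poly/user {:local/root "../../components/user"}}}

;; projects/command-line/deps.edn - after the swap (sketch):
{:deps {poly/user-remote {:local/root "../../components/user-remote"}}}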

Question: If I have two bases (say http-api and mq-api ), can I easily configure the build to produce a
single artifact, that includes both bases?

Answer: Yes, you include the bases in your deps.edn configuration file for the project, e.g.:

{:deps {poly/http-api {:local/root "../../bases/http-api/src"}
        poly/mq-api   {:local/root "../../bases/mq-api/src"}
        ...}}

You can put any combination of components and bases in a project and build a single artifact out of it. We
don't support switching components at run-time. If you need polymorphism, then you can solve it by using
multi-methods to switch between two different components.

Question: Aren't "pass-through" functions used in the interfaces kinda stupid, when you instead can use
import-vars (https://github.com/aleph-io/potemkin#import-vars) to "import" them?
Answer: The import-vars macro is kind of cool, but we have decided to keep it as it is. The main
reason is that consistency and simplicity have a great value to us. Using a macro could have been an
alternative if it solved the whole problem, but unfortunately, we will end up with a mix of this macro and
explicitly declared functions, which is less consistent and adds complexity. By making the
def/defn/defmacro statements explicit in the interface namespace(s) we also get a lot of flexibility, see
the end of the interface section of the tool documentation.
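
For comparison, this is roughly what an interface built on import-vars would look like (hypothetical namespaces; shown only to illustrate the trade-off discussed above):

;; An interface namespace that imports vars instead of declaring
;; explicit pass-through functions.
(ns com.mycompany.user.interface
  (:require [com.mycompany.user.core]
            [potemkin :refer [import-vars]]))

(import-vars
 [com.mycompany.user.core find-user])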

Question: How to design good interfaces?

Answer: Polylith gives you the “tool” here, but it’s up to you to decide what is a good or bad interface. Our
best advice here is that you have a look at the realworld example app and the Polylith codebase itself to get
some answers/inspiration.

Question: How to grow and extend interfaces?

Answer: We normally add one more function at a time to an interface when we need more functionality. We
also change the name of a function when we find a better name. We use different techniques to improve
the readability of the interface which you can read about at the end of the interface section. When a part
within a component can be used somewhere else, we extract it to a new component to get rid of code
duplication. In that case, the functions that previously lived in the first component’s interface will now live in
the new one. In general, try to communicate what the interface does and/or is, as clearly as possible.

Question: Does Polylith allow you to upgrade a system while it’s running?

Answer: Polylith doesn’t help you with that. Polylith helps you with a lot like separating development from
production, but it’s not what e.g. Spring is for Java.

Question: How to handle state?

Answer: The short answer is that this is also handled by you as a developer, by using an existing library or
tool. This is explained in the profile section.

Question: When we program we want to structure the code into “difficult” and “easy” modules. The difficult
modules should be few and written by expert programmers. The easy modules should be many and written
by less experienced programmers. How does Polylith allow this?

Answer: You are free to organise your components, bases, and projects in any way you like in Polylith.
Because it’s so easy to refactor a Polylith codebase, it’s also easy to adjust the design while you go, without
painting yourself into a corner. If you prefer to divide the codebase into “difficult” and “easy” components,
that’s fine, but we don’t have strong opinions about this, because people have different perspectives on
what is good or bad practice/design.
Who made this?
The team who created Polylith, and how you can get in touch with us.

The Polylith team


Joakim Tengstrand
Role: "Father of Polylith". Invented the Polylith architecture, developed the first Polylith system, developed
the Leiningen plugin, developed the tools.deps based Polylith tool and its GitBook documentation, and
co-authored the presentation and this GitBook documentation.
Contact: joakim.tengstrand[at]gmail[dot]com, @jtengstrand, https://linkedin.com/in/joakim-tengstrand

Furkan Bayraktar
Role: Co-developed the first Polylith system, developed five production Polylith systems, started the
development of the tools.deps based Polylith tool, co-developed the Leiningen plugin, and developed the
RealWorld example app.
Contact: me[at]furkanbayraktar[dot]com, @furkan3ayraktar, https://linkedin.com/in/furkanbayraktar

James Trunk
Role: Led the Polylith team, authored the presentation and this GitBook documentation.
Contact: james.trunk[at]gmail[dot]com, https://linkedin.com/in/james-trunk
