
Disclaimer

The copyright in the contents of this ebook belongs to Thought Machine Group Limited (unless expressly stated
to belong to another person).

The information contained in this ebook is confidential and may not be copied, distributed, published, or
disclosed without the express prior written permission of Thought Machine Group Limited.

This ebook is intended for informational purposes only and should not replace independent professional advice.

No representation (whether express or implied) is made as to, and no reliance should be placed on, any
information, including estimates, projections, targets and opinions, set out in this ebook, and no liability
whatsoever is accepted as to any errors, omissions or misstatements contained in this ebook, and, accordingly,
neither Thought Machine Group Limited nor any of its officers or employees accepts any liability whatsoever
arising directly or indirectly from the use of this ebook for any purpose.

The provision by Thought Machine Group Limited of any service or product described in this ebook is subject to
contract.

In transcribing and translating this course, every effort has been made to ensure that it is an accurate
reflection of the course content; however, there may be some differences in the texts, and in the case of any such
difference, the English language version shall prevail.

Contents

About this Course
Aims and Objectives
Course Agenda
Module 1
Legacy Core Architecture
Data management - design limitations
Consequences of the design
Vault Core overview
The Vault Core platform
Vault Core Application Architecture
Banking on Vault Core
Legacy core architectures vs. Vault Core
Banking on Vault - Learning and Principles
Legacy core architectures vs. Vault Core
Creative migration strategies
Vault Core APIs
The operations dashboard
What is the Operations Dashboard
Configuration Management
Configuration data in Vault
Data Model Overview
Entity Model Overview
Vault Entity Model
Course recap and wrap up

About this Course

Transcript
In this elearning course you will learn how the Vault Core architecture is built to support banks in addressing
many of the challenges facing legacy core banking systems.

This course is suitable for Business Analysts, Engineers, Solution Architects and SREs working with Vault.

Let’s take a quick look at the Aims and Objectives of this course, next.

Aims and Objectives

Transcript
Having completed this course, you will be fully conversant with how the Vault Core architectural model supports
banks of all sizes in modernising their infrastructure and delivering real benefits to all stakeholders. You will be
able to:

~ Describe Vault Core architecture.

~ List some of the key benefits that our architecture provides.

~ State the four entities that Vault Core is responsible for.

~ Explain what an ideal core banking platform should provide.

And lastly before we get started let’s review the course contents.

Course Agenda

Transcript

This course comprises six modules, which provide an introduction to the Vault Core architecture and compare it
with a typical legacy core banking system.

In the process we will consider the advantages and disadvantages of both architectures.

That completes our brief introduction.

In our first module we will consider some of the challenges and limitations of a typical legacy core banking
system.

Module 1

Transcript
Welcome to Module 1.

In the first module we will explore what a legacy core banking system looks like and consider what challenges
this poses for banks looking to modernise their infrastructure and deliver real benefits for all stakeholders.

This will set the scene and provide us with a good base from which to delve into the solutions offered by a
modern core banking system such as Vault Core.

So let’s get started.

So let’s consider what we mean by legacy core architectures and what this means for most banks.

Legacy architectures will generally have a large amount of data and configuration which needs to be kept in
sync, such as:

~ Customer master data


~ Balances and transactions
~ Accounting data (used for regulatory reporting and risk)

If this data falls out of sync then incorrect information may be used in reporting or calculations, which is not
acceptable.

Let’s take a look at an example of this.

Legacy Core Architecture

Transcript

Here we see some elements of a legacy core architecture. Let's explore this example a little more:

In this legacy core example we see that there are two core banking systems, each storing Customer Data,
Customer Balance Data and data relevant for group accounting purposes.

These cores will provide data to the Operational Data Store through batch file extracts performed on a regular
(usually daily) basis.

Each time an extract of data is performed, the bank will need to complete reconciliation between the source
system and the destination system.

This becomes quite a complex set of tasks when you factor in the amount of data that is repeated across
systems.

As shown in this diagram we have customer data in 3 systems, balances data in 4 systems and accounting data
in 5 different systems!

Needless to say in order to keep all of this repeated data in sync we need to perform regular batch updates.

Data management - design limitations

Transcript

In a legacy core, we will need to trade off between how fresh data is and the amount of work required both to
complete the batch process itself and to carry out any reconciliation required.

We cannot constantly perform batch extracts of data because this will take up valuable resources that are
required to perform BaU processing activity during the day.

However, not refreshing data enough will significantly increase the risk of data inconsistency across systems.

The tradeoff between the two options will typically lead to batches being done from a core’s reporting database
once or twice a day, with a team doing a mix of manual and automated checks on the data to ensure it is
successfully extracted.

Transcript
Each extraction of data is generally driven by a batch schedule that has no knowledge of previous or subsequent
steps.

For example, if the core banking system extract failed to complete without errors, we might still propagate
incomplete data from our core to our middle tier, and subsequently to reporting databases in the Group Services
area of the bank.

This example illustrates the problem - customer data has been incorrectly extracted from the core and is now
wrong in the Customer Master System and also wrong in the banking group services area too.

Transcript
Each data extract requires reconciliation to ensure correctness with the source system, for example validating
account balances.

In reality reconciliation will involve validating a broad range of different characteristics of the data we have
extracted from the core.

This will include carrying out both automated and manual checks, which we can see in this example, such as
validating that:

~ In the ODS, account balances are correct: account 1 = $10 and account 2 = $14

~ In the deposit products core, account balances are correct: account 1 = $10 and account 2 = $14

~ The number of accounts is 2

~ In banking group services, the total sum of account balances is $24

This reconciliation will be done through the use of scripts, reconciliation applications and some manual checking
of generated reports.

Transcript
In a legacy core architecture, data is not always available to downstream systems in real time.

This is for two reasons:

~ Firstly we use batch processing on a scheduled basis to get data from the core.

~ Secondly, we ideally need to complete the reconciliation of our data before it is used, so that we can be
confident that our data is consistent across our banking systems.

As a consequence, the bank cannot do real-time analytics or maintain a minute-by-minute understanding of what
is going on within its systems, and must instead rely on a snapshot taken at defined intervals, usually once a
day.

And as we all know a day is a long time in the world of finance and banking.

Consequences of the design

Transcript
Legacy migrations are often the single most challenging and daunting element of a core banking transformation
programme, with the effort underpinned by a number of challenges.

Let's consider what some of these are:

~ Batch-based migration file transfer: each batch load demands a significant overhead, which limits the
frequency of loads and how long data can practically be maintained for.

~ Multiple load touchpoints: where data is required to be loaded in a specific order, this is usually achieved
through slow, waterfall sequencing of data.

~ Complex load orchestration: the load process involves multiple steps and touchpoints, increasing overall effort
and the risk of error at each stage of the process.

~ Batch-based migration file outputs: event outputs are generated only after an entire load completes, making it
challenging to know when an individual record has actually been loaded.

~ Limited testing of scheduled behaviour: scheduled account behaviour is hard to test without creating complex
test frameworks or letting real time pass.

~ Constrained migration strategies: core banking capabilities limit the potential migration strategies, pushing
banks towards unfavourable big-bang approaches.

Taken together, these challenges are a significant obstacle to banks wishing to provide modern banking services
at speed and scale in a digital age.

Transcript
Not wishing to labour the point but there are other significant challenges for banks using legacy core systems,
some of which I will close with here.

Legacy core banking systems typically provide no-code solutions for building financial products, with a fixed set
of schedules for end-of-day processing activities and other scheduled events.

This tends to lead to products which are largely the same with little differentiation between different banks, and
difficulties in personalising products for the bank’s requirements or the customer’s needs.

Banks have been working around these issues for 20 years or more and have come up with a range of so-called
solutions to these issues.

However, in many cases these workarounds become a significant barrier to modernising a bank's infrastructure,
and only serve to add cost and complexity whilst reducing the ability to innovate and provide the market with
what it wants, when it wants it.

So, before we close out this module, let me share how some banks tactically work around these issues.

Transcript

We will hear statements like these:

On Data Accuracy

We can "solve" the problem of having many sources of truth for our data by using batches and reconciliation.

On Timeliness of Data

We can solve the freshness of data (i.e. the risk that someone will take out a loan in one system, and then in
another one before we propagate the change) by having more frequent batch updates.

On Real-Time Data

As the payment systems are plumbed into the core banking system, we can solve the lack of real-time data
(notifications, reporting, etc.) by wrapping them in middleware and notifying customers from there (another
stateful source).

On System Performance

We can solve the poor performance of the core banking system (CBS) by introducing data caches and good data
practices in our middle tier.

On System Support

We can work around differences in operational support using manual effort and we must document different
processes for modifying different records on different systems.

Now, I am not saying that I disagree with any of these statements; however, all of these workarounds add
additional processing time, cost, and manual complexity.

Furthermore, most of these solutions introduce or require changes to the core banking systems which are “high
energy changes” and happen at a low frequency, mainly due to the risks involved and the complexity of the
interrelated tasks.

It’s these and many other challenges that Vault Core sets out to address, which is what we shall consider in the
modules which follow.

That completes this module.

Vault Core overview

Transcript
Welcome to Module 2.

In this module we will explore the architectural model provided by Vault Core and delve into the component parts
which make up the platform.

This will provide us with a good base from which to begin our examination of the solutions offered by a modern
core banking system such as Vault Core.

So let’s get started.

The Vault Core platform

Transcript
Here we see a representation of the Vault Core platform.

Vault Core is a cloud-native, highly scalable core banking platform with an open architecture.

It's built on a distributed microservices architecture, utilising containers and Kubernetes.

These enable the development of a client's custom product portfolio to be completed much faster than on typical
legacy platforms.

Indeed you may recall how in legacy architectures product functionality is built into the core system and can’t
easily be extended or tailored by the bank.

Vault Core addresses this by providing unparalleled product flexibility and personalisation for customers, while
maintaining a single common platform across all banks, with all functionality exposed via our three key APIs, as
shown here.

~ The Core API integrates with external applications and enables processes like authentication, account
management, smart contract simulation, roles management and more.

~ The Postings API is responsible for managing all financial movements on accounts held within Vault Core.

~ The Streaming API provides native support for real-time event streaming, using Kafka; events include
accounting events, posting balance updates, and customer and status changes, to mention a few of many.

Vault Core is also a system of record which provides a set of tools for the management of products, accounts,
and postings.

Products in Vault are implemented using a design pattern called smart contracts, which allows banks to easily
build and manage product catalogues by allowing them to reuse smart contract code in multiple products.

An optional component, called the Operations Dashboard provides a user interface to interact with the core
banking platform.

Transcript
Vault Core sits as the core banking system within a bank’s technology stack.

This diagram gives you an idea about the number of integrations you may need to consider when deploying a
new banking core in your organisation.

The advantages of APIs that are secure, scalable, and promote loosely coupled integrations that minimise
dependencies between systems should be clear to see!

A diverse set of core integrations are typically required.

Monolithic legacy cores typically use tightly coupled, point-to-point interfaces when connecting to other
systems.

This makes maintaining them a costly and time consuming activity, and change can be risky.

Extensive testing is usually required even for small updates.

In comparison, Thought Machine has implemented a suite of well designed APIs.

This ensures that our services can be consumed by a wide range of clients and systems.

Vault Core allows a bank to run any entity and any product; in addition, banks have the flexibility to choose any
cloud hosting option to support the Vault Core platform.

Transcript
There are four main entities that Vault is responsible for:

The first is Products.

Vault is responsible for the manufacturing and operation of banking products, and these are expressed as smart
contracts.

~ Vault smart contracts are written in Python and include the financial logic for your products, such as minimum
balance or maximum spending rules (see the sketch after this list).

~ Once you have written and tested your code, you will upload it to Vault.

~ Vault will then run the code you’ve written, and the relevant microservice will carry out the relevant tasks as
dictated by your code.
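To make this concrete, below is a minimal, self-contained Python sketch of the kind of rule a smart contract's financial logic expresses. It deliberately does not use the real Contracts API surface (hook names, injected objects and rejection types differ); treat every name in it as an illustrative assumption.

```python
# Self-contained sketch of the kind of rule a smart contract encodes.
# This is NOT the real Contracts API: in Vault, hooks, balances, and a `vault`
# object are injected at runtime; here we model the idea with plain Python.

MINIMUM_BALANCE = 0.0  # hypothetical product rule: the account may not go overdrawn


class Rejected(Exception):
    """Stand-in for Vault's posting rejection: stops a batch from committing."""


def pre_posting_check(current_balance: float, proposed_change: float) -> None:
    """Financial logic that would live in a pre-posting hook of a smart contract."""
    if current_balance + proposed_change < MINIMUM_BALANCE:
        raise Rejected("Posting would breach the minimum balance rule.")


# A $10 account receiving a $15 debit is rejected before it reaches the ledger.
try:
    pre_posting_check(current_balance=10.0, proposed_change=-15.0)
except Rejected as err:
    print(f"Batch rejected: {err}")
```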

The second entity is Accounts, of which there are two types, Internal and Customer accounts, here we will focus
on Customer Accounts.

A Customer account in Vault is an instance of a Product.

This means simply that an account in Vault is backed by a specific version of your smart contract and the code in
this smart contract is run when you open a Customer account in Vault.

Accounts within Vault have balances, and these balances are used in order to allocate the positions a customer
holds.

In other words does the customer owe the bank money or does the bank owe the customer money?

You will be able to determine this by looking at the different balance addresses on your accounts in Vault.

Third, we have postings.

Vault is responsible for generating and recording movements of funds between accounts on Vault.

~ Postings can be generated within Vault Core either by using the Postings API or the Contracts API.

The Postings API will be used in payments lifecycles to action the transfer of funds, hence you will make postings
across various accounts and balances using our postings API.

The Contracts API is able to generate postings from the smart contract when needed, for example on an interest
or fee accrual event the smart contract can autonomously make postings to facilitate the accrual of interest.

Vault Core’s Ledger will record the postings that have been successfully made against an account.

The ledger itself provides data in real time and can capture multiple types of assets and currencies with balances
that can be configured into any structure.

Lastly we have customer entities.

It's important to note that Vault itself has not been designed to store PII (personally identifiable information) in a
production environment.

We expect customer data to be stored outside of Vault in a Customer Relationship Management (CRM) system.

The customer entity in Vault will be used to indicate that a customer has a holding in an account, and this will be
linked to your actual customer entity outside of Vault using a unique ID.

Vault Core Application Architecture

Transcript
An ideal core banking platform should promote self-service, product personalisation, and the ability to easily
make changes, test and learn.

In addition, it should not be a heavily customised platform which cannot be upgraded without vendor
involvement.

Vault satisfies both of these requirements by separating the platform into two layers, as indicated here. The
configuration layer comprises the Chart of Accounts and smart contracts.

This layer is specific to each client and is intended to be customisable by the client, using the training,
documentation and tooling provided by Thought Machine.

This means the client can build financial products and their internal accounting structure in accordance with their
own requirements, without having to rely on Thought Machine to do so.

The second layer is the platform layer, which is common to all of our clients. Thought Machine provides changes
and new features regularly through minor and major releases.

The platform layer is built by Thought Machine only and clients use APIs to upload their configuration layer
content to the platform layer and interact with the Vault system.

You can access additional information on both of these layers through our Documentation Hub.

That completes this module.

Banking on Vault Core

Transcript
Welcome to Module 3.

In this short module we will take a look at how Vault Core addresses some of the key challenges facing banks
wishing to move away from legacy core banking platforms.

Legacy core architectures vs. Vault Core

Transcript
To resolve some of the duplication of data, services and functionality we see in legacy core banking system
architectures, Vault Core offers a single core platform that can run any financial product with only configuration
layer (smart contract) changes, which solves many of the data reconciliation issues.

Transcript
To resolve some of the inflexibility and batching of processes we see in legacy core banking system
architectures, Vault Core allows each product to control its own schedules and parameters (with no requirement
for global batches and no core banking deployments required), which simplifies the management of multiple
similar products.

In addition, because Vault has a low-code solution for developing financial products, banks are able to
differentiate themselves and their offerings from each other in order to service their own specific requirements.

For example, products that replicate back book products, or products that meet customers' specific needs
through personalisation.

Transcript

In order to avoid specific components for specific payments integrations, Vault uses a single postings/payments
API for all integrations (card, payments, etc.).

Any card or payment processor can hook into Vault Core in the same manner, regardless of jurisdiction or
scheme.

Banking on Vault - Learning and Principles

Transcript
Within Vault, payment devices are independent from the product implementations.

Your physical debit card is not dependent on your account logic.

One payment device can therefore be associated with multiple different accounts and multiple different types of
products.

For example, you could have one payment card associated with both a debit account and a credit card account,
and the customer can switch the card between the two products at will.

Legacy core architectures vs. Vault Core

Transcript
With its microservices-based, cloud-native design, Vault Core is designed to run in a single region in a highly
available configuration.

There are also multi-region DR (disaster recovery) strategies available.

You can take advantage of managed databases offered by CSPs (cloud service providers), which removes the
need to manually back up and restore databases.

This is in contrast to a legacy core banking platform, which is likely to use manual backup and restore
procedures.

Creative migration strategies

Transcript
Regarding migration, Vault Core's migration product offering has been designed to de-risk and simplify legacy
core migrations, with six key properties:

~ Firstly, data can be easily migrated and maintained at an account level using API entry points that support
real-time sync.

~ Secondly, we have native dependency management, which means you can set configurable dependencies in
the API request, enabling data to be sent in any order and loaded automatically when dependencies are met.

~ Thirdly, the load process can be fully automated, from the point of sending the API request to receiving
streamed events.

~ Fourthly, we support real-time event streaming from Vault, which enables faster reconciliations and provides
immediate data to support downstream business processes.

~ Fifthly, we support time-series testing, meaning you can fast-forward account activity as part of an out-of-
the-box test to quickly prove back book product behaviour against a known baseline.

~ Lastly, we enable innovative migration strategies from core banking platforms to Vault Core. For example,
production parallel runs and other migration designs are enabled by Vault's event-based streaming architecture.

That completes the final topic for this module.

Vault Core APIs

Transcript
Welcome to Module 4.

In this module we will consider the use of Vault Core APIs and apps.

Transcript
Vault APIs are split into three broad categories, as indicated in this diagram:

~ The Postings API is used for instructing movement of funds and observing the response.

~ The Core API is used for configuration and setup of Vault Core.

~ The Streaming API is used for observing all activity in the system (including postings).

Internally, key core banking capabilities are delivered using a microservice architecture. These include:

Products & Accounts

~ Handling of the smart contracts that you build to define the financial logic and lifecycles for your products
~ Customer accounts that are backed by your smart contracts
~ Internal accounts representing your chart of accounts, with support for double-entry bookkeeping

Flags and Parameters

~ Management of the attributes that direct the flow of the financial logic you have implemented in your smart
contracts

Postings & Balances

~ The ledger service prioritises and processes postings from the high and low priority topics
~ The point-in-time processor handles postings from scheduled events
~ Balance watermarking is an efficient way of tracking account balances at balance address level within
accounts; balances are the side effect of postings (see the sketch after this list)

Calendars and Schedules

~ Together, these resources drive the timing and execution of events in Vault, as defined by your smart contracts
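As a simple illustration of the idea that balances are the side effect of postings, the self-contained sketch below (all names ours, purely illustrative) keeps a running net balance per account and balance address as postings arrive. Vault's ledger service does this at scale, with prioritised topics and balance watermarking.

```python
from collections import defaultdict

# Running net balance per (account, balance address). Each committed posting
# adjusts the balance at the address it targets: balances are derived state.
balances = defaultdict(int)


def apply_posting(account_id: str, address: str, amount_minor_units: int) -> None:
    """Apply one committed posting; positive = credit, negative = debit."""
    balances[(account_id, address)] += amount_minor_units


# A $25.00 transfer from account A's DEFAULT address to account B's.
apply_posting("account-A", "DEFAULT", -2500)
apply_posting("account-B", "DEFAULT", +2500)

print(balances[("account-A", "DEFAULT")])  # -2500
print(balances[("account-B", "DEFAULT")])  # 2500
```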

Transcript
So let's take a closer look at the three main APIs Vault Core relies on, starting with the Core API.

The Core API is used for integration with external applications such as channels, CRM, bank operator UIs and
other systems.

It allows you to carry out calls to resources within Vault such as:

~ Account Management - you can manage customer accounts that reside in Vault Core.

This includes the ability to open new accounts, change the status of existing accounts, update the instance
parameters for accounts and close accounts on Vault.

~ Product Management - you can upload and change smart contract versions from the Core API.

~ Restrictions - you can restrict a customer account in the event of a card being lost, an account being
suspected of malicious activity, or any other reason to restrict an account's postings.

You can also restrict an account from being closed or updated, effectively freezing it.

~ Smart Contract Simulation - you can use the simulation part of the Core API to simulate smart contract code as
part of your testing.

~ Internal Accounts - here you can manage the Vault aspect of your chart of accounts.

~ Payment Devices - you can add and remove payment devices from customer accounts.

~ Roles Management - here you can manage employee roles and service tokens.

~ Flags - you can add flags to accounts and customer entities; flags allow you to influence smart contract
behaviour with custom reasons.

The Postings API is responsible for managing all financial movements on accounts held within Vault; journeys
you'll use it for include the following (see the sketch after this list):

~ Authorisation check on accounts

~ Settlement of funds on accounts

~ Release of funds

~ Posting credits and debits to Vault Core
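To illustrate the authorise-then-settle journey, here is a hedged sketch of the two instructions involved. The field names and values are assumptions for illustration only, not the actual Postings API schema; consult the Postings API reference for the real message shapes.

```python
# Hypothetical payload shapes only - field names here are illustrative assumptions.
authorisation = {
    "client_transaction_id": "txn-0001",   # lets the later settlement reference this auth
    "instruction_type": "outbound_authorisation",
    "account_id": "account-A",
    "amount": "25.00",
    "denomination": "USD",                 # funds are ring-fenced, not yet moved
}

settlement = {
    "client_transaction_id": "txn-0001",   # settles against the earlier authorisation
    "instruction_type": "settlement",
    "final": True,                         # a release would instead return the ring-fenced funds
}
```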

The Streaming API provides native support for real-time events streamed out of Vault Core, based on entries
written to the Vault Core database.

Events include: customer account change events, accounting events, posting balance updates, customer entity
status changes, payment device events, smart contract activation, configuration changes, and versioning
changes.
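Because the Streaming API is Kafka-based, consuming these events looks like ordinary Kafka consumption. The sketch below uses the third-party kafka-python client; the topic name and event fields are illustrative assumptions, not Vault's actual topic naming or schema.

```python
import json

from kafka import KafkaConsumer  # third-party client: pip install kafka-python

# The topic name and message fields below are illustrative assumptions.
consumer = KafkaConsumer(
    "vault.core.postings.events",          # hypothetical posting-events topic
    bootstrap_servers=["broker-1:9092"],
    group_id="data-warehouse-ingest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Forward every event downstream so the warehouse builds a complete picture.
    print(event.get("event_type"), event.get("account_id"))
```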

Transcript
By way of an example, let's walk through this set of Vault Core components, highlighting some key aspects,
including additional APIs.

Within Vault Core there are a significant number of microservices carrying out tasks such as:

~ Executing the schedules for products

~ Managing the processing of a posting that has been made and much more

These microservices work together to make up Vault Core.

We also have the smart contracts which are the banking products you will be using on the system.

The Vault Postings Ledger will be used in order to capture any postings that are made to Vault core.

You will interact with the Vault Core components using our APIs.

In addition to the three main APIs we have seen already, Vault Core uses a range of other APIs, one of which is
the Data Loader API.

The Data Loader API is used during the migration process to migrate data from a legacy core to Vault Core.

The data loader allows you to asynchronously load data into Vault via Kafka.

There are a few key properties of the Data Loader API worth a mention here:

~ Firstly it provides a single API entry point for all Core data (except Postings).

~ Secondly it automates the loading process from sending the API message to receiving streamed events.

~ It also supports idempotent requests, allowing simple retries and error handling.

The Data Loader takes care of the "Load" part of the ETL (Extract, Transform, Load) process.

In other words, you will have already extracted and transformed your data into the appropriate format before
using the Data Loader to load it into Vault Core.
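To illustrate the properties above, here is a hedged sketch of producing a single Data Loader request onto Kafka, showing where an idempotency key and a configurable dependency might sit. The topic name and message shape are our assumptions, not the documented schema.

```python
import json

from kafka import KafkaProducer  # third-party client: pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["broker-1:9092"],
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

# Hypothetical message shape: the data has already been Extracted and
# Transformed upstream; the Data Loader handles only the Load step.
request = {
    "request_id": "migrate-account-42",     # idempotency key: safe to retry on error
    "resource_type": "account",
    "payload": {"id": "account-42", "product_version_id": "pv-7"},
    "depends_on": ["migrate-customer-42"],  # loaded automatically once its dependency is met
}

producer.send("vault.data.loader.requests", value=request)  # hypothetical topic name
producer.flush()
```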

The postings API will be integrated into the Payments Processor or Card Processor, depending on the payments
journey you are planning to undertake.

Bear in mind that all payments and card processing systems will hook into Vault core in the same manner.

Both of these integrations will need to come from outside of Vault Core, meaning Vault core is not responsible for
the activities these systems will carry out.

Outside of the payments processing aspect, you will have the actual payments or card scheme and the fraud
engine(s) within your bank.

Highlighting the Streaming API aspect - you will be able to integrate systems such as your data warehouse
system(s) with the Streaming API to consume events from Vault as they are streamed out.

We recommend you consume all events that are streamed out of Vault core in order to build a complete picture in
your data warehouse.

Moving on to the Operations Dashboard: this is an optional web-based application that allows back office users
to carry out various operational tasks using a GUI, integrated with our Core APIs.

The Operations Dashboard has full role based access control for back office users.

In this diagram we have provided a 'middle tier' box; this is where you will integrate Vault into your wider banking
ecosystem. You may decide to use a domain connector to align data to your enterprise architecture standards.

Connected to the middle tier systems will be your customer channels, including any customer notifications
services, the authentication for customers, the systems of engagement for your customers and more.

Hopefully this example has given you a good understanding of where and how our APIs can be used to interact
with the wider banking ecosystem.

That completes this module.

The operations dashboard

Transcript
Welcome to Module 5.

In this module we will explore the optional browser-based application known as the Operations Dashboard or ops
dash for short.

What is the Operations Dashboard

Transcript
The Operations Dashboard is a browser-based application that enables interaction with the Vault Core Platform
and Configuration Layer components.

Its purpose is to provide an interface to allow back office staff to interact with and investigate account changes,
postings, schedules, customer entity changes and more.

User Access is tightly controlled, as the sketch after this list illustrates:

~ Each operation in the bank has a permission.

~ Sets of permissions are grouped into roles.

~ Each employee is assigned a role.
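That permission model can be pictured with a small, self-contained sketch (plain Python, illustrative names only): permissions group into roles, and each employee's assigned role determines what they may do.

```python
# Illustrative model of the hierarchy: permission -> role -> employee.
ROLES = {
    "back_office_viewer": {"accounts.view", "postings.view"},
    "back_office_operator": {"accounts.view", "accounts.update", "postings.view"},
}

EMPLOYEE_ROLES = {"jan@example-bank.com": "back_office_viewer"}


def is_allowed(employee: str, permission: str) -> bool:
    """An operation proceeds only if the employee's role grants its permission."""
    role = EMPLOYEE_ROLES.get(employee, "")
    return permission in ROLES.get(role, set())


print(is_allowed("jan@example-bank.com", "postings.view"))    # True
print(is_allowed("jan@example-bank.com", "accounts.update"))  # False: viewers cannot update
```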

Transcript
The Operations Dashboard provides tools for servicing the account and posting records held in the Platform
Layer, and configuration management tools to upload and manage new smart contracts.

We can drill down further into customer entities, as indicated here, searching by customer using identifying
information (when using Vault to store such information).

We can investigate any notes or transactions associated with an account and action processes associated with
an account.

Configuration Management

Transcript
The Operations Dashboard provides access to configuration management tools, as shown here.

This allows users to manage configuration data in the system and to upload new smart contracts that define new
bank products.

Note I have removed some sections such as ‘Related processes’ and ‘Associated tasks’ from this screenshot
simply to show the contract upload section.

If you would like further information about this functionality, please visit our Documentation Hub and/or reach
out to your Client Success Manager.

That completes this module.

Configuration data in Vault

Transcript
Welcome to Module 6.

In this module we will take a look at some key user journeys to help pull all the pieces together.

Transcript
Smart contracts can be deployed to a Vault instance, and managed, using three tools.

These are; the Operations Dashboard, the Core API, and the Configuration Layer Utilities.

The Operations Dashboard and the Core API are typically used during proof-of-concepts or ad-hoc testing.

Most banks will want to integrate deployment of smart contracts into their CI/CD pipelines; for this, they should
use the Vault Configuration Layer Utilities.
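Purely as an illustration, a CI/CD step that uploads a contract might resemble the sketch below, which posts the contract code to a hypothetical Core API endpoint using the requests library. The URL, headers and payload shape are assumptions, and in practice the Configuration Layer Utilities wrap this interaction for you.

```python
import requests  # third-party: pip install requests

VAULT_API = "https://vault.example-bank.com"  # hypothetical instance URL
ENDPOINT = f"{VAULT_API}/v1/smart-contracts"  # hypothetical endpoint path

with open("current_account.py", encoding="utf-8") as contract_file:
    contract_code = contract_file.read()

# Hypothetical payload shape - the real Core API schema will differ.
response = requests.post(
    ENDPOINT,
    headers={"X-Auth-Token": "SERVICE_TOKEN"},  # service tokens are managed via roles management
    json={"display_name": "Current Account", "code": contract_code, "version": "1.0.1"},
    timeout=30,
)
response.raise_for_status()  # fail the pipeline step if the upload is rejected
```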

Visit our documentation hub for further information about deployment options.

Next we will walk through some typical user journeys to see how our configuration data operates in some
real-life examples.

Data Model Overview

Transcript
There are several different categories of data that we use and store in Vault.

These are: configuration, financial, supplementary, and service data.

Configuration data controls the application logic of the system; this is achieved by using smart contracts.

Smart contracts represent bank products: they define what bank products are available and how they behave.

Configuration Layer Utilities are used to deploy smart contracts and define global variables.

Financial data is the data that is generated through normal operation of the bank.

This includes the Postings Ledger, Accounts Data Store and Balance Data Store, which are the source of truth in
Vault, as well as account data.

This data is stored in the Vault Core.

Supplementary data is stored and used by Vault Apps. Service data is data primarily used internally for
monitoring and supporting Vault instances.

This includes logs, for viewing information about the internal operation of individual services, and monitoring
data, for observing the traffic, health and state of every service.

It also includes tracing data for tracking individual calls through Vault’s microservices to pinpoint issues and
understand system behaviour.

Together, configuration, financial, supplementary, and service data make up the Vault data model.

Entity Model Overview

Transcript
In Vault, we have three main entities: postings, balances and accounts.

A posting is a movement of funds between accounts on Vault Core.

Balances are aggregations of postings.

An account is a lightweight object that is associated with the resources that make up a bank account; it will
contain one or more stakeholders and one or more balances. In Vault, a customer account is an instance of a
smart contract.

Let’s look at each of these in a bit more detail.

Vault Entity Model

Transcript
A posting represents a debit or a credit in Vault.

Postings are instructed using the Postings API and are stored in the append-only Postings Ledger, meaning that
once a posting has been committed it cannot be removed or edited. The Postings Ledger is the source of truth
for the financial state of Vault. There are multiple different types of postings used within the Vault platform,
which provide the degree of flexibility required by most banks within a core banking platform.

Balances are the sum of postings, that is, the net result of all of the postings that have been made.

A balance in Vault is stored against an account address. The net balance is calculated according to whether the
address is associated with an asset or liability account.
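Here is a small worked sketch of that sign convention. The treatment shown is a common double-entry convention and an assumption for illustration; check the balances documentation for Vault's exact rules.

```python
# Net a list of postings for one balance address. The sign convention below is
# a common double-entry treatment and an assumption for illustration:
# liability addresses grow with credits, asset addresses grow with debits.
def net_balance(postings: list, account_type: str) -> int:
    total = 0
    for direction, amount in postings:       # amount in minor units, always positive
        signed = amount if direction == "credit" else -amount
        total += signed if account_type == "liability" else -signed
    return total


history = [("credit", 1000), ("debit", 400)]  # $10.00 in, then $4.00 out
print(net_balance(history, "liability"))      # 600: the bank owes the customer $6.00
print(net_balance(history, "asset"))          # -600: the same postings from an asset view
```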

Our final component is accounts.

An account in Vault is one of two types:

~ Internal Account: used to allocate and track funds owned by the financial institution; not governed by smart
contract logic.

~ Customer Account: associated with one or more of the bank's customers and governed by smart contract
logic.

Customer accounts in Vault are partitioned into one or more addresses, each storing its own balance. The
account address structure is dictated by the smart contract that the account is associated with; however,
addresses are also created dynamically when they are targeted by postings that are accepted against the
account.

Taken together, these addresses represent the financial state of the account.

To appreciate Vault Core in context within a wider banking ecosystem, please access our Vault Core Reference
Architecture course.

That completes this module and the final topic, so let's have a final recap of what we have covered on this
course.

Course recap and wrap up

Transcript

Here we see a brief reminder of the topics we have covered.

We began by exploring what a typical legacy core banking architecture contains and identified a number of
challenges this poses to banks looking to modernise their approach to banking.

We also considered how Vault core can meet many of these challenges and provide banks with the control they
need over product development and management on a day to day basis.

Please visit our Documentation Hub for further information about any of the topics covered on this course and/or
reach out to your Client Success Manager.

©2024 Thought Machine Group. All rights reserved.
Thought Machine Group Limited is a company registered in England & Wales.

Registered number: 11114277.


Registered Office: 5 New Street Square, London EC4A 3TW
