TigerGraph: A Native MPP Graph Database

Alin Deutsch, Yu Xu, Mingxi Wu, Victor Lee

White Paper, January 2019
ABSTRACT
TigerGraph's high-level query language, GSQL, is designed for compatibility with SQL, while simultaneously allowing NoSQL programmers to continue thinking in Bulk-Synchronous Parallel (BSP) terms and reap the benefits of high-level specification. GSQL is sufficiently high-level to allow declarative SQL-style programming, yet sufficiently expressive to concisely specify the sophisticated iterative algorithms required by modern graph analytics and traditionally coded in general-purpose programming languages like C++ and Java. We report very strong scale-up and scale-out performance over a benchmark we published on GitHub for full reproducibility.

1 INTRODUCTION
Graph database technology is among the fastest-growing segments in today's data management industry. Since seeing early adoption by companies including Twitter, Facebook and Google, graph databases have evolved into a mainstream technology used today by enterprises across industries, complementing (and sometimes replacing) both traditional RDBMSs and newer NoSQL big-data products. Maturing beyond social networks, the technology is disrupting an increasing number of areas, such as supply chain management, e-commerce recommendations, cybersecurity, fraud detection, power grid monitoring, and many other areas in advanced data analytics.

While research on the graph data model (with associated query languages and academic prototypes) dates back to the late 1980s, in recent years we have witnessed the rise of several products offered by commercial software companies like Neo Technologies (supporting the graph query language Cypher [28]) and DataStax [10] (supporting Gremlin [29]). These languages are also supported by the commercial offerings of many other companies (Amazon Neptune [2], IBM Compose for JanusGraph [19], Microsoft Azure CosmosDB [25], etc.).

We introduce TigerGraph, a new graph database product by the homonymous company. TigerGraph is a native parallel graph database, in the sense that its proprietary storage is designed from the ground up to store graph nodes, edges and their attributes, and to support sophisticated graph analytics using a high-level language called GSQL. We subscribe to the requirements for a modern query language listed in the G-Core manifesto [3]. To these, we add:

• facilitating adoption by the largest query developer community in existence, namely SQL developers. GSQL was designed for full compatibility with SQL, in the sense that if no graph-specific primitives are mentioned, the queries become pure standard SQL. The graph-specific primitives include the flexible regular path expression-based patterns advocated in [3];

• the support, beyond querying, of classical multi-pass and iterative algorithms as required by modern graph analytics (such as PageRank, weakly-connected components, shortest paths, recommender systems, etc., all GSQL-expressible). This is achieved while staying declarative and high-level by introducing only two primitives: loops and accumulators;

• allowing developers with a NoSQL background (which typically espouses a low-level and imperative programming style) to preserve their Map/Reduce or graph Bulk-Synchronous Parallel (BSP) [30] mentality while reaping the benefits of high-level declarative query specification. GSQL admits natural Map/Reduce and graph BSP interpretations and is actually implemented accordingly to support parallel processing.

A free TigerGraph developer edition can be downloaded from the company Web site (http://tigergraph.com), together with documentation, an e-book, white papers, a series of representative analytics examples reflecting real-life customer needs, as well as the results of a benchmark comparing our engine to other commercial graph products.

Paper Organization. The remainder of the paper is organized as follows. Section 2 overviews TigerGraph's key design choices, while its architecture is described in Section 3. We present the DDL and DML in Sections 4 and 5, respectively. We discuss evaluation complexity in Section 6, BSP interpretations in Section 7, and we conclude in Section 8. In Appendix B, we report on an experimental evaluation using a benchmark published for reproducibility in TigerGraph's GitHub repository.
2 OVERVIEW OF TIGERGRAPH'S NATIVE PARALLEL GRAPH DESIGN
We overview here the main ingredients of TigerGraph's design, showing how they work together to achieve speed and scalability.

A Native Distributed Graph. TigerGraph was designed from the ground up as a native graph database. Its proprietary data store holds nodes, edges, and their attributes. We decided against the more facile solution of building a wrapper on top of a more generic NoSQL data store because this virtual-graph strategy incurs a double performance penalty. First, the translation from virtual graph manipulations to physical storage operations introduces overhead. Second, the underlying structure is not optimized for graph operations.

Compact Storage with Fast Access. TigerGraph is not an in-memory database: holding all data in memory is a preference but not a requirement (this is true for the enterprise edition, but not the free developer edition). Users can set parameters that specify how much of the available memory may be used for holding the graph. If the full graph does not fit in memory, the excess is spilled to disk.

Data values are stored in encoded formats that effectively compress the data. The compression factor varies with the graph structure and data, but typical compression factors are between 2x and 10x. Compression reduces not only the memory footprint, and thus the cost to users, but also CPU cache misses, speeding up overall query performance. Decompression/decoding is efficient and transparent to end users, so the benefits of compression outweigh the small time delay for compression/decompression. In general, the encoding is homomorphic [26], that is, decompression is needed only for displaying the data; when values are used internally, they often remain encoded and compressed. Internally, hash indices are used to reference nodes and edges. For this reason, accessing a particular node or edge in the graph is fast, and stays fast even as the graph grows in size. Moreover, maintaining the index as new nodes and edges are added to the graph is also very fast.

Parallelism and Shared Values. TigerGraph was built for parallel execution, employing a design that supports massively parallel processing (MPP) in all aspects of its architecture. TigerGraph exploits the fact that graph queries are compatible with parallel computation. Indeed, the nature of graph queries is to follow the edges between nodes, traversing multiple distinct paths in the graph. These traversals are a natural fit for parallel/multithreaded execution. Various graph algorithms require these traversals to proceed according to certain disciplines, for instance in a breadth-first manner, keeping track of visited nodes and pruning traversals upon encountering them. The standard solution is to assign a temporary variable to each node, marking whether it has already been visited. While such marker-manipulating operations may suggest low-level, general-purpose programming languages, one can actually express complex graph traversals in a few lines of code (shorter than this paragraph) using TigerGraph's high-level query language.
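To make this claim concrete, here is a minimal sketch of such a traversal in GSQL, counting the vertices reachable from a seed vertex. It is our illustration rather than an excerpt from the paper or product documentation: the graph name SocialNet, vertex type Person, and edge type Knows are hypothetical, while the accumulators and the ACCUM/POST_ACCUM clauses are described in Sections 5.2 and 5.3.

CREATE QUERY countReachable(VERTEX<Person> src) FOR GRAPH SocialNet {
  OrAccum @visited;          // temporary per-vertex marker: already visited?
  SumAccum<INT> @@reached;   // global count of vertices reached

  Frontier = {src};
  Frontier = SELECT s FROM Frontier:s ACCUM s.@visited += true;
  WHILE (Frontier.size() > 0) DO
    Frontier = SELECT t
               FROM Frontier:s -(Knows:e)- Person:t
               WHERE t.@visited == false      // prune traversals into visited nodes
               ACCUM t.@visited += true       // mark newly discovered vertices
               POST_ACCUM @@reached += 1;     // count each new vertex exactly once
  END;
  PRINT @@reached;
}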
Storage and Processing Engines Written in C++. TigerGraph's storage engine and processing engine are implemented in C++. A key reason for this choice is the fine-grained control of memory management offered by C++. Careful memory management contributes to TigerGraph's ability to traverse many edges simultaneously in a single query. While an alternative implementation using managed memory (as in the Java Virtual Machine) would be convenient, it would make it difficult for the programmer to optimize memory usage.

Automatic Partitioning. In today's big-data world, enterprises need their database solutions to be able to scale out to multiple machines, because their data may grow too large to be stored economically on a single server. TigerGraph is designed to automatically partition the graph data across a cluster of servers while preserving high performance. The hash index is used to determine not only the within-server but also the which-server data location. All the edges that connect out from a given node are stored on the same server.

Distributed Computing. TigerGraph supports a distributed computation mode that significantly improves performance for analytical queries that traverse a large portion of the graph. In distributed query mode, all servers are asked to work on the query; each server's actual participation is on an as-needed basis. When a traversal path crosses from server A to server B, the minimal amount of information that server B needs is passed to it. Since server B already knows about the overall query request, it can easily fit in its contribution. In a benchmark study, we tested the commonly used PageRank algorithm. This algorithm is a severe test of a graph engine's computational and communication speed because it traverses every edge, computes a score for every node, and repeats this traverse-and-compute step for several iterations. When the graph was distributed across eight servers, the PageRank query completed nearly seven times faster than on a single server (details in Appendix B.3.3). This attests to TigerGraph's efficient use of distributed infrastructure.

Programming Abstraction: an MPP Computation Model. The low-level programming abstraction offered by TigerGraph integrates and extends the two classical graph programming paradigms of think-like-a-vertex (a la Pregel [22], GraphLab [21] and Giraph [14]) and think-like-an-edge (PowerGraph [15]).
[...] and synchronizes different system components to collaboratively finish a task. Topology storage management focuses on minimizing the memory footprint of the graph topology description while maximizing parallel efficiency, by slicing and compressing the graph into ideal parallel compute and storage units. It also provides kernel APIs to answer CRUD requests on the three data sets just mentioned.

The GPE is the parallel engine. It processes UDFs in a bulk-synchronous parallel (BSP) fashion. The GPE manages system resources automatically, including memory, cores, and machine partitions if TigerGraph is deployed in a cluster environment (this is supported in the Enterprise Edition). Besides resource management, the GPE is mainly responsible for the parallel processing of tasks from the task queue. A task can be a custom UDF, a standard UDF, or a CRUD operation. The GPE synchronizes, schedules and processes all these tasks to satisfy ACID transaction semantics while maximizing query throughput. Details can be found online.

Components Not Depicted. There are other miscellaneous components not depicted in the architecture diagram, such as a Kafka message service for buffering query requests and data streams, and a control console named GAdmin for the system admin to invoke kernel function calls, backup and restore, etc.

[...] The attributes for the vertices/edges populating the container are declared using syntax borrowed from SQL's CREATE TABLE command. In TigerGraph's model, a graph may contain both directed and undirected edges. The same vertex/edge container may be shared by multiple graphs.

EXAMPLE 1 (DDL). Consider the following DDL statements declaring two graphs: LinkedIn and Twitter. Notice how symmetric relationships (such as LinkedIn connections) are modeled as undirected edges, while asymmetric relationships (such as Twitter's following or posting) correspond to directed edges. Edge types specify the types of their source and target vertices, as well as optional edge attributes (see the since and end attributes below). For vertices, one can declare primary key attributes with the same meaning as in SQL (see the email attribute of Person vertices).

CREATE VERTEX Person
  (email STRING PRIMARY KEY, name STRING, dob DATE)
CREATE VERTEX Tweet
  (id INT PRIMARY KEY, text STRING, timestamp DATE)
CREATE DIRECTED EDGE Posts
  (FROM Person, TO Tweet)
CREATE DIRECTED EDGE Follows
  (FROM Person, TO Person, since DATE)
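The extraction cuts Example 1 short, before the graphs themselves are declared. Based on the surrounding text, the missing tail would look roughly as follows; the undirected Connected edge type is implied by Example 2 below, and the CREATE GRAPH syntax follows TigerGraph's documented DDL, so treat this as a sketch rather than the paper's verbatim figure:

CREATE UNDIRECTED EDGE Connected
  (FROM Person, TO Person, since DATE)
CREATE GRAPH LinkedIn (Person, Connected)
CREATE GRAPH Twitter (Person, Tweet, Posts, Follows)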
[...] signifying the debiting of A in favor of B (the amount and timestamp would be modeled as edge attributes). The debiting action corresponds to a crediting action in the opposite sense, from B to A. If the application needs to explicitly keep track of both the credit and debit vocabulary terms, a natural modeling consists in introducing for each Debit edge a reverse Credit edge between the same endpoints, with both edges sharing the values of the attributes, as in the following example:

CREATE VERTEX Account
  (number INT PRIMARY KEY, balance FLOAT, ...)
CREATE DIRECTED EDGE Debit
  (FROM Account, TO Account, amount FLOAT, ...)
WITH REVERSE EDGE Credit

5 GSQL DML
The guiding principle behind the design of GSQL was to facilitate adoption by SQL programmers while simultaneously flattening the learning curve for novices, especially for adopters of the BSP programming paradigm [30]. To this end, GSQL's design starts from SQL, extending its syntax and semantics parsimoniously, i.e., avoiding the introduction of new keywords for concepts that already have an SQL counterpart. We first summarize the key additional primitives before detailing them.

Graph Patterns in the FROM Clause. GSQL extends SQL's FROM clause to allow the specification of patterns. Patterns specify constraints on paths in the graph, and they also contain variables, bound to vertices or edges of the matching paths. In the remaining query clauses, these variables are treated just like standard SQL tuple variables.

Accumulators. The data found along a path matched by a pattern can be collected and aggregated into accumulators. Accumulators support multiple simultaneous aggregations of the same data according to distinct grouping criteria. The aggregation results can be distributed across vertices, to support multi-pass and, in conjunction with loops, even iterative graph algorithms implemented in MPP fashion.

Loops. GSQL includes control flow primitives, in particular loops, which are essential to support standard iterative graph analytics (e.g., PageRank [6], shortest paths [13], weakly connected components [13], recommender systems, etc.).

Direction-Aware Regular Path Expressions (DARPEs). GSQL's FROM clause patterns contain path expressions that specify constrained reachability queries in the graph. GSQL path expressions start from the de facto standard of two-way regular path expressions [7], which is the culmination of a long line of work on graph query languages, including such reference languages as WebSQL [23], StruQL [11] and Lorel [1]. Since two-way regular path expressions were developed for directed graphs, GSQL extends them to support both directed and undirected edges in the same graph. We call the resulting path expressions Direction-Aware Regular Path Expressions (DARPEs).

5.1 Graph Patterns in the FROM Clause
GSQL's FROM clause extends SQL's basic FROM clause syntax to also allow atoms of the general form

  <GraphName> AS? <pattern>

where the AS keyword is optional, <GraphName> is the name of a graph, and <pattern> is a pattern given by a regular path expression with variables. This is in analogy to standard SQL, in which a FROM clause atom

  <TableName> AS? <Alias>

specifies a collection (a bag of tuples) to the left of the AS keyword and introduces an alias to the right. This alias can be viewed as a simple pattern that introduces a single tuple variable. In the graph setting, the collection is the graph and the pattern may introduce several variables. We show more complex patterns in Section 5.4, but illustrate first with the following simple-pattern example.

EXAMPLE 2 (Seamless Querying of Graphs and Relational Tables). Assume Company ACME maintains a human resource database stored in an RDBMS containing a relational table "Employee". It also has access to the "LinkedIn" graph from Example 1, containing the professional network of LinkedIn users. The query in Figure 2 joins relational HR employee data with LinkedIn graph data to find the employees who have made the most LinkedIn connections outside the company since 2016.

Notice the pattern Person:p -(Connected:c)- Person:outsider, to be matched against the "LinkedIn" graph. The pattern variables are "p", "c" and "outsider", binding respectively to a "Person" vertex, a "Connected" edge and a "Person" vertex. Once the pattern is matched, its variables can be used just like standard SQL tuple aliases. Notice that neither the WHERE clause nor the SELECT clause syntax discriminates among aliases, regardless of whether they range over tuples, vertices or edges. The lack of an arrowhead accompanying the edge subpattern -(Connected:c)- requires the matched "Connected" edge to be undirected. □
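Figure 2 itself is not reproduced in this extract. The following sketch is consistent with the description, in the GSQL dialect of this paper; the Employee columns (email, name), the date comparison, and the company attribute of Person are our assumptions:

SELECT e.name, count(*) AS newOutsideConns INTO MostConnected
FROM Employee:e,
     LinkedIn AS Person:p -(Connected:c)- Person:outsider
WHERE e.email = p.email            // join the relational table with the graph
  AND c.since >= '2016-01-01'      // connections made since 2016
  AND outsider.company <> 'ACME'   // ... to people outside the company
GROUP BY e.name
ORDER BY newOutsideConns DESC;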
To support cross-graph joins, the FROM clause allows the mention of multiple graphs, analogously to how the standard SQL FROM clause may mention multiple tables.
EXAMPLE 3 (Cross-Graph and -Table Joins). Assume we wish to gather information on employees, including how many tweets about their company and how many LinkedIn connections they have. The employee info resides in a relational table "Employee", the LinkedIn data is in the graph named "LinkedIn", and the tweets are in the graph named "Twitter". The query is shown in Figure 3. Notice the join across the two graphs and the relational table. Also notice the arrowhead in the edge subpattern ranging over the Twitter graph, -(Posts>)-, which matches only directed edges of type "Posts", pointing from the "Person" vertex to the "Tweet" vertex. □

5.2 Accumulators
We next introduce the concept of accumulators, i.e., data containers that store an internal value and take inputs that are aggregated into this internal value using a binary operation. Accumulators support the concise specification of multiple simultaneous aggregations by distinct grouping criteria, and the computation of vertex-stored side effects to support multi-pass and iterative algorithms.

The accumulator abstraction was introduced in the Green-Marl system [17] and was adapted as a high-level first-class citizen in GSQL, which distinguishes between two flavors:

• Vertex accumulators are attached to vertices, with each vertex storing its own local accumulator instance. They are useful for aggregating data encountered during the traversal of path patterns and for storing the result distributed over the visited vertices.

• Global accumulators have a single instance and are useful for computing global aggregates.

Accumulators are polymorphic, being parameterized by the type V of the internal value, the type I of the inputs, and the binary combiner operation

  ⊕ : V × I → V.

Accumulators implement two assignment operators. Denoting with a.val the internal value of accumulator a,

• a = i sets a.val to the provided input i;
• a += i aggregates the input i into a.val using the combiner, i.e., sets a.val to a.val ⊕ i.

For comprehensive documentation on GSQL accumulators, see the developer's guide at http://docs.tigergraph.com. Here, we explain accumulators by example.

EXAMPLE 4 (Multiple Aggregations by Distinct Grouping Criteria). Consider a graph named "SalesGraph" in which the sale of a product p to a customer c is modeled by a directed "Bought" edge from the "Customer" vertex modeling c to the "Product" vertex modeling p. The number of product units bought, as well as the discount at which they were offered, are recorded as attributes of the edge. The list price of the product is stored as an attribute of the corresponding "Product" vertex.

We wish to simultaneously compute the sales revenue per product from the "toy" category, the toy sales revenue per customer, and the overall total toy sales revenue.³

We define a vertex accumulator for each kind of revenue. The revenue for toy product p will be aggregated at the vertex modeling p by the vertex accumulator revenuePerToy, while the revenue for customer c will be aggregated at the vertex modeling c by the vertex accumulator revenuePerCust. The total toy sales revenue will be aggregated in a global accumulator called totalRevenue. With these accumulators, the multi-grouping query is concisely expressible (Figure 4).

Note the definition of the accumulators using the WITH clause, in the spirit of standard SQL definitions. Here, SumAccum<float> denotes the type of accumulators that hold an internal floating-point scalar value and aggregate inputs using floating-point addition. Accumulator names prefixed by a single @ symbol denote vertex accumulators (one instance per vertex), while accumulator names prefixed by @@ denote a global accumulator (a single shared instance).

Also note the novel ACCUM clause, which specifies the generation of inputs to the accumulators. Its first line introduces a local variable "salesPrice", whose value depends on attributes found in both the "Bought" edge and the "Product" vertex. This value is aggregated into each accumulator using the "+=" operator. c.@revenuePerCust refers to the vertex accumulator instance located at the vertex denoted by the vertex variable c. □

³ Note that writing this query in standard SQL is cumbersome. It requires performing two GROUP BY operations, one by customer and one by product. Alternatively, one can use the window functions' OVER ... PARTITION BY clause, which can perform the groupings independently, but whose output repeats the customer revenue for each product bought by the customer and the product revenue for each customer buying the product. Besides yielding an unnecessarily large result, this solution then requires two post-processing SQL queries to separate the two aggregates.
Figure 4: The multi-grouping query of Example 4.

WITH
  SumAccum<float> @revenuePerToy, @revenuePerCust, @@totalRevenue
BEGIN
  SELECT c
  FROM SalesGraph AS Customer:c -(Bought>:b)- Product:p
  WHERE p.category = 'toys'
  ACCUM float salesPrice = b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0,
        c.@revenuePerCust += salesPrice,
        p.@revenuePerToy += salesPrice,
        @@totalRevenue += salesPrice;
END
Multi-Output SELECT Clause. GSQL's accumulators allow the simultaneous specification of multiple aggregations of the same data. To take full advantage of this capability, GSQL complements it with the ability to concisely specify simultaneous outputs into multiple tables for data obtained by the same query body. This can be thought of as evaluating multiple independent SELECT clauses.

EXAMPLE 5 (Multi-Output SELECT). While the query in Example 4 outputs the customer vertex ids, in that example we were interested in its side effects: annotating vertices with the aggregated revenue values and computing the total revenue. If instead we wished to create two tables, one associating customer names with their revenue and one associating toy names with theirs, we would employ GSQL's multi-output SELECT clause as follows (preserving the FROM, WHERE and ACCUM clauses of Example 4):

SELECT c.name, c.@revenuePerCust INTO PerCust;
       p.name, p.@revenuePerToy INTO PerToy

Notice the semicolon, which separates the two simultaneous outputs. □
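Assembled with the clauses it preserves from Figure 4, the multi-output query of Example 5 would read as follows (our assembly of the two fragments above; the name attributes of Customer and Product are assumptions):

WITH
  SumAccum<float> @revenuePerToy, @revenuePerCust, @@totalRevenue
BEGIN
  SELECT c.name, c.@revenuePerCust INTO PerCust;
         p.name, p.@revenuePerToy INTO PerToy
  FROM SalesGraph AS Customer:c -(Bought>:b)- Product:p
  WHERE p.category = 'toys'
  ACCUM float salesPrice = b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0,
        c.@revenuePerCust += salesPrice,
        p.@revenuePerToy += salesPrice,
        @@totalRevenue += salesPrice;
END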
Semantics. The semantics of GSQL queries can be given in a declarative fashion analogous to SQL semantics: for each distinct match of the FROM clause pattern that satisfies the WHERE clause condition, the ACCUM clause is executed precisely once. After the ACCUM clause executions complete, the multi-output SELECT clause executes each of its semicolon-separated individual fragments independently, as standard SQL clauses. Note that we specify neither the order in which matches are found nor, consequently, the order of ACCUM clause applications; we leave this to the engine implementation, to support optimization. The result is well-defined (input-order-invariant) whenever the accumulator's binary aggregation operation ⊕ is commutative and associative. This is certainly the case for addition, which is used in Example 4, and for most of GSQL's built-in accumulators.

Extensible Accumulator Library. GSQL offers a list of built-in accumulator types. TigerGraph's experience with the deployment of GSQL has yielded the short list in Section A, which covers most use cases we have encountered in customer engagements. In addition, GSQL allows users to define their own accumulators by implementing a simple C++ interface that declares the binary combiner operation ⊕ used for aggregating inputs into the stored value. This leads to an extensible query language, facilitating the development of accumulator libraries.

Accumulator Support for Multi-pass Algorithms. The scope of an accumulator declaration may cover a sequence of query blocks, in which case the accumulated values computed by a block can be read (and further modified) by subsequent blocks, thus achieving powerful composition effects. These are particularly useful in multi-pass algorithms.
EXAMPLE 6 (Two-Pass Recommender Query). Assume we wish to write a simple toy recommendation system for a customer c given as a parameter to the query. The recommendations are ranked in the classical manner: each recommended toy's rank is a weighted sum of the likes by other customers. Each like by another customer o is weighted by the similarity of o to customer c. In this example, similarity is the standard log-cosine similarity [27], which reflects how many toys customers c and o like in common. Given two customers x and y, their log-cosine similarity is defined as log(1 + count of common likes for x and y).

The query is shown in Figure 5. The query header declares the name of the query and its parameters (the vertex c of type "Customer", and the integer value k of desired recommendations). The header also declares the graph for which the query is meant, thus freeing the programmer from repeating the name in the FROM clauses. Notice also that the accumulators are not declared in a WITH clause. In such cases, the GSQL convention is that the accumulator scope spans all query blocks. Query TopKToys consists of two blocks.

The first query block computes, for each other customer o, their log-cosine similarity to customer c, storing it in o's vertex accumulator @lc. To this end, the ACCUM clause first counts the toys liked in common, by aggregating for each such toy the value 1 into o's vertex accumulator @inCommon. The POST_ACCUM clause then computes the logarithm and stores it in o's vertex accumulator @lc.

Next, the second query block computes the rank of each toy t by adding up the similarities of all other customers o who like t. It outputs the top k recommendations into table Recommended, which is returned by the query.

Notice the input-output composition due to the second query block's FROM clause referring to the set of vertices OthersWithCommonLikes (represented as a single-column table) computed by the first query block. Also notice the side-effect composition due to the second block's ACCUM clause referring to the @lc vertex accumulators computed by the first block. Finally, notice how the SELECT clause outputs vertex accumulator values (t.@rank) analogously to how it outputs vertex attributes (t.name). □

Example 6 introduces the POST_ACCUM clause, which is a convenient way to post-process accumulator values after the ACCUM clause finishes computing their new aggregate value.
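Figure 5 is not part of this extract. The following sketch reconstructs query TopKToys from the description above; the Likes edge type, the log() function, and the ORDER BY/LIMIT form of top-k selection are our assumptions:

CREATE QUERY TopKToys(VERTEX<Customer> c, INT k) FOR GRAPH SalesGraph {
  SumAccum<INT>   @inCommon;   // toys liked in common with customer c
  SumAccum<FLOAT> @lc;         // log-cosine similarity to c
  SumAccum<FLOAT> @rank;       // weighted rank of a toy

  // Block 1: compute each other customer o's log-cosine similarity to c.
  OthersWithCommonLikes =
    SELECT o
    FROM   Customer:s -(Likes>)- Product:t -(<Likes)- Customer:o
    WHERE  s == c AND o != c
    ACCUM  o.@inCommon += 1                  // one input per common toy
    POST_ACCUM o.@lc = log(1 + o.@inCommon);

  // Block 2: rank each toy by the summed similarities of its likers.
  SELECT t.name, t.@rank INTO Recommended
  FROM   OthersWithCommonLikes:o -(Likes>)- Product:t
  ACCUM  t.@rank += o.@lc
  ORDER BY t.@rank DESC
  LIMIT  k;

  RETURN Recommended;
}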
5.3 Loops
GSQL includes a while loop primitive which, when combined with accumulators, supports iterative graph algorithms. We illustrate with the classic PageRank [6] algorithm.

EXAMPLE 7 (PageRank). Figure 6 shows a GSQL query implementing a simple version of PageRank. Notice the while loop, which runs a maximum number of iterations provided as parameter maxIteration. Each vertex v is equipped with a @score accumulator that computes the rank at each iteration, based on the current score at v and the sum of fractions of previous-iteration scores of v's neighbors (denoted by vertex variable n). v.@score' refers to the value of this accumulator at the previous iteration.

According to the ACCUM clause, at every iteration each vertex v contributes to its neighbor n's score a fraction of v's current score, spread over v's outdegree. The score fractions contributed by the neighbors are summed up in the vertex accumulator @received_score.

As per the POST_ACCUM clause, once the sum of score fractions is computed at v, it is combined linearly with v's current score based on the parameter dampingFactor, yielding a new score for v.

The loop terminates early if the maximum difference over all vertices between the previous iteration's score (accessible as v.@score') and the new score (now available in v.@score) is within a threshold given by parameter maxChange. This maximum is computed in the @@maxDifference global accumulator, which receives as inputs the absolute differences computed by the POST_ACCUM clause instantiations for every value of vertex variable v. □
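Figure 6 is likewise missing from the extract. Here is a sketch consistent with Example 7's description; the Page vertex type, LinksTo edge type, outdegree() function, and initial values are our assumptions, and @score' denotes the previous iteration's value, as described above:

CREATE QUERY PageRank(INT maxIteration, FLOAT maxChange, FLOAT dampingFactor)
FOR GRAPH WebGraph {
  SumAccum<FLOAT> @score = 1.0;              // current rank of each vertex
  SumAccum<FLOAT> @received_score;           // score fractions received from neighbors
  MaxAccum<FLOAT> @@maxDifference = 9999.0;  // max per-vertex change this iteration

  AllPages = {Page.*};
  WHILE (@@maxDifference > maxChange) LIMIT maxIteration DO
    @@maxDifference = 0;
    T = SELECT v
        FROM AllPages:v -(LinksTo>:e)- Page:n
        // v contributes a fraction of its current score to each neighbor n
        ACCUM n.@received_score += v.@score / v.outdegree()
        // each v then combines what it received linearly with its own score
        POST_ACCUM v.@score = (1 - dampingFactor) + dampingFactor * v.@received_score,
                   v.@received_score = 0,
                   @@maxDifference += abs(v.@score - v.@score');
  END;
}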
5.4 DARPEs
We follow the tradition instituted by a line of classic work on querying graph (a.k.a. semi-structured) data, which yielded such reference query languages as WebSQL [23], StruQL [11] and Lorel [1]. Common to all these languages is a primitive that allows the programmer to specify traversals along paths whose structure is constrained by a regular path expression.

Regular path expressions (RPEs) are regular expressions over the alphabet of edge types. They conform to the context-free grammar

  rpe     →  _  |  EdgeType  |  '(' rpe ')'  |  rpe '*' bounds?  |  rpe '.' rpe  |  rpe '|' rpe
  bounds  →  N? '..' N?

where EdgeType and N are terminal symbols representing, respectively, the name of an edge type and a natural number. The wildcard symbol "_" denotes any edge type, "." denotes the concatenation of its pattern arguments, and "|" their disjunction. The '*' symbol is the standard Kleene star, specifying several (possibly 0 or unboundedly many) repetitions of its RPE argument. The optional bounds specify a minimum and a maximum number of repetitions (to the left and right of the ".." symbol, respectively).

A path p in the graph is said to satisfy an RPE R if the sequence of edge types read from the source vertex of p to the target vertex of p spells out a word in the language accepted by R when interpreted as a standard regular expression over the alphabet of edge type names.

DARPEs. Since GSQL's data model allows for the existence of both directed and undirected edges in the same graph, we refine the RPE formalism, proposing Direction-Aware RPEs (DARPEs). These allow one to also specify the orientation of directed edges in the path. To this end, we extend the alphabet to include, for each edge type E, the symbols
• E, denoting a hop along an undirected E-edge;
• E>, denoting a hop along an outgoing E-edge (from source to target vertex); and
• <E, denoting a hop along an incoming E-edge (from target to source vertex).
The notion of satisfaction of a DARPE by a path extends classical RPE satisfaction in the natural way.

DARPEs enable the free mix of edge directions in regular path expressions. For instance, the pattern

  E>.(F> | <G)*.<H.J

matches paths starting with a hop along an outgoing E-edge, followed by a sequence of zero or more hops along either outgoing F-edges or incoming G-edges, next a hop along an incoming H-edge, and finally ending with a hop along an undirected J-edge.

5.4.1 DARPE Semantics. A well-known semantic issue arises from the tension between RPE expressivity and well-definedness. Regarding expressivity, applications sometimes need to specify reachability in the graph via RPEs comprising unbounded (Kleene) repetitions of a path shape (e.g., to find which target users are influenced by source users on Twitter, we seek the paths connecting users [...]
5.5 Updates
GSQL supports vertex-, edge-, as well as attribute-level modifications (insertions, deletions and updates), with a syntax inspired by SQL (detailed in the online documentation at https://docs.tigergraph.com/dev/gsql-ref).

6 GSQL EVALUATION COMPLEXITY
Due to its versatile control flow and user-defined accumulators, GSQL is evidently a Turing-complete language, and it therefore makes no sense to discuss query evaluation complexity for the unrestricted language. However, there are natural restrictions for which such considerations are meaningful and which allow comparisons to other languages: if we rule out loops, accumulators and negation, but allow DARPEs, we are left essentially with a language that corresponds to the class of aggregating conjunctive two-way regular path queries [31].

The evaluation of queries in this restricted GSQL fragment has polynomial data complexity. This is due to the shortest-path semantics and to the fact that, in this fragment, one need not enumerate the shortest paths: it suffices to count them in order to maintain the multiplicities of bindings. While enumerating paths (even only shortest ones) may lead to a result of size exponential in the size of the input graph, counting shortest paths is tractable (i.e., has polynomial data complexity).

Alternate Designs Causing Intractability. If we change the path legality criterion (to the defaults from languages such as Gremlin and Cypher), tractability is no longer given. An additional obstacle to tractability is the primitive of binding a variable to the entire path (supported in both Gremlin and Cypher but not in GSQL), which may lead to results of size exponential in the input graph size, even under shortest-path semantics.

Emulating the Intractable (and Any Other) Path Legality Variants. GSQL accumulators and loops constitute a powerful combination that can implement both flavors of the intractable path semantics (non-repeated-vertex or non-repeated-edge legality), and beyond, as syntactic sugar, that is, without requiring further extensions to GSQL.

To this end, the programmer can split the pattern into a sequence of query blocks, one for each single hop in the path, explicitly materializing legal paths in vertex accumulators whose contents are transferred from the source to the target of the hop. In the process, the paths materialized in the source vertex accumulator are extended with the current hop, and the extensions are stored in the target's vertex accumulator. For instance, a query block with the FROM clause pattern S:s -(E.F)- T:t is implemented by a sequence of two blocks, one with FROM pattern S:s -(E)- _:x, followed by one with FROM pattern _:x -(F)- T:t. Similar transformations can eventually translate arbitrarily complex DARPEs into sequences of query blocks whose patterns specify single hops (while loops are needed to implement Kleene repetitions).

For each single-hop block, whenever a pattern S:s -(E)- T:t is matched, the paths leading to the vertex denoted by s are kept in a vertex accumulator, say s.@paths. These paths are extended with an additional hop to the vertex denoted by t, the extensions are filtered by the desired legality criterion, and the qualifying extensions are added to t.@paths; see the sketch after this section's closing paragraph.

Note that this translation can be performed automatically, thus allowing programmers to specify the desired evaluation semantics on a per-query and even a per-DARPE basis within the same query. The current GSQL release does not include this automatic translation, but future releases might do so given sufficient customer demand (which is yet to materialize). Absent such demand, we believe there is value in making programmers work harder (and therefore think harder about whether they really want) to specify intractable semantics.
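The following sketch illustrates the translation for the two-hop pattern S:s -(E.F)- T:t under non-repeated-vertex legality. It is our illustration, not the paper's: the ListAccum type with list literals, concatenation and a contains() membership test, and the FOREACH/IF forms inside ACCUM, are assumptions about the accumulator library and clause syntax.

// Each vertex stores the legal paths (as vertex lists) that reach it.
ListAccum<ListAccum<VERTEX>> @paths;

Src = {S.*};
Src = SELECT s FROM Src:s ACCUM s.@paths += [s];   // every path starts at its source

Mid = SELECT x
      FROM Src:s -(E:e1)- _:x
      ACCUM FOREACH p IN s.@paths DO
              IF NOT p.contains(x) THEN        // legality: no repeated vertex
                x.@paths += p + [x]            // extend the path by one hop
              END
            END;

Tgt = SELECT t
      FROM Mid:x -(F:e2)- T:t
      ACCUM FOREACH p IN x.@paths DO
              IF NOT p.contains(t) THEN
                t.@paths += p + [t]            // only qualifying extensions survive
              END
            END;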
7 BSP INTERPRETATION OF GSQL
GSQL is a high-level language that admits a semantics compatible with SQL, hence declarative and agnostic of the computation model. This presentation appeals to SQL experts as well as to novice users who wish to abstract away computation-model details. However, GSQL was designed from the get-go for full compatibility with the classical BSP programming paradigms embraced by developers of graph analytics and, more generally, of NoSQL Map/Reduce jobs. We give here alternate interpretations of GSQL which allow developers to preserve their Map/Reduce or graph BSP mentality while gaining the benefits of high-level specification when writing GSQL queries.

7.1 Think-Like-A-Vertex/Edge Interpretation
The TigerGraph graph can be viewed both as a data model and as a computation model. As a computation model, the programming abstraction offered by TigerGraph integrates and extends the two classical graph programming paradigms of think-like-a-vertex (a la Pregel [22], GraphLab [21] and Giraph [14]) and think-like-an-edge (PowerGraph [15]). A key difference between TigerGraph's computation model and that of the above-mentioned systems is that the latter allow the programmer to specify only one function that uniformly aggregates the messages/inputs received at a vertex, while in GSQL one can declare an unbounded number of accumulators of different types.
Conceptually, each vertex or edge acts simultaneously as a parallel unit of storage and computation, with the graph constituting a massively parallel computational mesh. Each vertex and edge can be associated with a compute function, specified by the ACCUM clause (vertex compute functions can also be specified by the POST_ACCUM clause).

The compute function is instantiated for each edge/vertex, and the instantiations execute in parallel, generating accumulator inputs (acc-inputs for short). We refer to this computation as the acc-input generation phase. During this phase, the compute function instantiations work under snapshot semantics: the effect of incorporating generated inputs into accumulator values is not visible to the individual executions until all of them finish. This guarantees that all function instantiations can execute in parallel, starting from the same accumulator snapshot, without interference, as there are no dependencies between them.

Once the acc-input generation phase completes, the aggregation phase ensues. Here, inputs are aggregated into their accumulators using the appropriate binary combiners. Note that vertex accumulators located at different vertices can work in parallel, without interfering with each other.

This semantics requires synchronous execution, with the aggregation phase waiting at a synchronization barrier until the acc-input generation phase completes (which in turn waits at a synchronization barrier until the preceding query block's aggregation phase completes, if such a block exists). It is therefore an instantiation of the Bulk-Synchronous Parallel processing model [30].

7.2 Map/Reduce Interpretation
GSQL also admits an interpretation that appeals to NoSQL practitioners who are not specialized in graphs and are trained in the classical Map/Reduce paradigm.

This interpretation is most evident on a normal form of GSQL queries, in which the FROM clause contains a single pattern that specifies a one-hop DARPE, S:s -(E:e)- T:t, with E a single edge type or a disjunction thereof. It is easy to see that all GSQL queries can be normalized in this way (though such normalization may involve the introduction of accumulators to implement the flow of data along multi-hop traversals, and of loops to implement Kleene repetitions). Once normalized, all GSQL queries can be interpreted as follows.

In this interpretation, the FROM, WHERE and ACCUM clauses together specify an edge map (EM) function, which is mapped over all edges in the graph. For those edges that satisfy the FROM clause pattern and the WHERE condition, the EM function evaluates the ACCUM clause and outputs a set of key-value pairs, where the key identifies an accumulator A and the value is an acc-input for A. The ensuing aggregation phase follows the classical map/reduce paradigm, with A playing the role of a reducer. Since for vertex accumulators the aggregation happens at the vertices, we refer to this phase as the vertex reduce (VR) function.

Each GSQL query block thus specifies a variation of Map/Reduce jobs, which we call EM/VR jobs. As opposed to standard map/reduce jobs, which define a single map/reduce function pair, a GSQL EM/VR job can specify several reducer types at once, by defining multiple accumulators.
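To make the EM/VR reading concrete, here is the query block of Figure 4 again, annotated (annotations ours) with the roles that this interpretation assigns to its clauses:

WITH
  SumAccum<float> @revenuePerToy, @revenuePerCust, @@totalRevenue
BEGIN
  SELECT c
  // EM: FROM and WHERE select the edges the edge-map function is applied to
  FROM SalesGraph AS Customer:c -(Bought>:b)- Product:p
  WHERE p.category = 'toys'
  // EM: the ACCUM body is the map function; each += below emits a
  // key-value pair whose key identifies an accumulator instance
  ACCUM float salesPrice = b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0,
        c.@revenuePerCust += salesPrice,   // key: @revenuePerCust at vertex c
        p.@revenuePerToy  += salesPrice,   // key: @revenuePerToy at vertex p
        @@totalRevenue    += salesPrice;   // key: the single global instance
  // VR: each vertex accumulator then reduces its pending inputs at its host
  // vertex using its combiner (here, addition); @@totalRevenue reduces globally
END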
8 CONCLUSIONS
TigerGraph's design is pervaded by MPP-awareness in all aspects, ranging from (i) the native storage with its customized partitioning, compression and layout schemes, to (ii) the execution engine, architected as a multi-server cluster that minimizes cross-boundary communication and exploits multi-threading within each server, to (iii) the low-level graph BSP programming abstractions, and to (iv) the high-level query language admitting BSP interpretations.

This design yields noteworthy scale-up and scale-out performance, whose experimental evaluation is reported in Appendix B for a benchmark that we made available on GitHub for full reproducibility (https://github.com/tigergraph/ecosys/tree/benchmark/benchmark/tigergraph). We used this benchmark also for comparison with other leading graph database systems (ArangoDB [4], Azure CosmosDB [25], JanusGraph [12], Neo4j [28], Amazon Neptune [2]), against which TigerGraph compares favorably, as detailed in an online report. The scripts for these systems are also published on GitHub.

GSQL represents a sweet spot in the trade-off between abstraction level and expressivity: it is sufficiently high-level to allow declarative SQL-style programming, yet sufficiently expressive to specify sophisticated iterative graph algorithms and configurable DARPE semantics. These are traditionally coded in general-purpose languages like C++ and Java, and are available only as built-in library functions in other graph query languages such as Gremlin and Cypher, with the drawback that advanced programming expertise is required for customization.

The GSQL query language shows that the choice between declarative SQL-style and NoSQL-style programming over graph data is a false choice, as the two are eminently compatible. GSQL also shows a way to unify the querying of relational tabular and graph data.
GSQL is still evolving, in response to our experience with customer deployments. We are also responding to the experiences of the graph developer community at large, as TigerGraph is a participant in the current ANSI standardization working groups for graph query languages and graph query extensions of SQL.

REFERENCES
[1] Serge Abiteboul, Dallan Quass, Jason McHugh, Jennifer Widom, and Janet Wiener. 1997. The Lorel Query Language for Semistructured Data. Int. J. on Digital Libraries 1, 1 (1997), 68–88.
[2] Amazon. [n. d.]. Amazon Neptune. https://aws.amazon.com/neptune/.
[3] Renzo Angles, Marcelo Arenas, Pablo Barceló, Peter A. Boncz, George H. L. Fletcher, Claudio Gutierrez, Tobias Lindaaker, Marcus Paradies, Stefan Plantikow, Juan F. Sequeda, Oskar van Rest, and Hannes Voigt. 2018. G-CORE: A Core for Future Graph Query Languages. In SIGMOD 2018, Houston, TX, USA. 1421–1432. https://doi.org/10.1145/3183713.3190654
[4] ArangoDB. 2018. ArangoDB. https://www.arangodb.com/.
[5] TigerGraph. 2018. The Graph Database Benchmark. https://github.com/tigergraph/ecosys/tree/benchmark/benchmark/tigergraph.
[6] Sergey Brin, Rajeev Motwani, Lawrence Page, and Terry Winograd. 1998. What can you do with a Web in your Pocket? IEEE Data Eng. Bull. 21, 2 (1998), 37–47. http://sites.computer.org/debull/98june/webbase.ps
[7] Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, and Moshe Y. Vardi. 2000. Containment of Conjunctive Regular Path Queries with Inverse. In KR 2000, Breckenridge, Colorado, USA. 176–185.
[8] R. G. G. Cattell, Douglas K. Barry, Mark Berler, Jeff Eastman, David Jordan, Craig Russell, Olaf Schadow, Torsten Stanienda, and Fernando Velez (Eds.). 2000. The Object Data Management Standard: ODMG 3.0. Morgan Kaufmann.
[9] Peter Chen. 1976. The Entity-Relationship Model - Toward a Unified View of Data. ACM Transactions on Database Systems 1, 1 (March 1976), 9–36.
[10] DataStax. 2018. DataStax Enterprise. https://www.datastax.com/.
[11] Mary F. Fernandez, Daniela Florescu, Alon Y. Levy, and Dan Suciu. 1997. A Query Language for a Web-Site Management System. ACM SIGMOD Record 26, 3 (1997), 4–11.
[12] The Linux Foundation. 2018. JanusGraph. http://janusgraph.org/.
[13] A. Gibbons. 1985. Algorithmic Graph Theory. Cambridge University Press.
[14] Apache Giraph. [n. d.]. Apache Giraph. https://giraph.apache.org/.
[15] Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. 2012. PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. In USENIX OSDI.
[16] W3C SPARQL Working Group. 2018. SPARQL.
[17] Sungpack Hong, Hassan Chafi, Eric Sedlar, and Kunle Olukotun. 2012. Green-Marl: a DSL for easy and efficient graph analysis. In ASPLOS 2012, London, UK. 349–362. https://doi.org/10.1145/2150976.2151013
[18] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 2003. Introduction to Automata Theory, Languages, and Computation (2nd international ed.). Addison-Wesley.
[19] IBM. [n. d.]. Compose for JanusGraph. https://www.ibm.com/cloud/compose/janusgraph.
[20] Leonid Libkin, Wim Martens, and Domagoj Vrgoc. 2016. Querying Graphs with Data. J. ACM 63, 2 (2016), 14:1–14:53. https://doi.org/10.1145/2850413
[21] Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M. Hellerstein. 2012. Distributed GraphLab: A Framework for Machine Learning in the Cloud. PVLDB 5, 8 (2012), 716–727.
[22] Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. 2010. Pregel: A System for Large-Scale Graph Processing. In SIGMOD 2010.
[23] Alberto O. Mendelzon, George A. Mihaila, and Tova Milo. 1996. Querying the World Wide Web. In PDIS. 80–91.
[24] A. O. Mendelzon and P. T. Wood. 1995. Finding regular simple paths in graph databases. SIAM J. Comput. 24, 6 (December 1995), 1235–1258.
[25] Microsoft. 2018. Azure Cosmos DB. https://azure.microsoft.com/en-us/services/cosmos-db/.
[26] Mark A. Roth and Scott J. Van Horn. 1993. Database compression. ACM SIGMOD Record 22, 3 (September 1993), 31–39.
[27] Amit Singhal. 2001. Modern Information Retrieval: A Brief Overview. IEEE Data Eng. Bull. 24, 4 (2001), 35–43. http://sites.computer.org/debull/A01DEC-CD.pdf
[28] Neo Technologies. 2018. Neo4j. https://www.neo4j.com/.
[29] Apache TinkerPop. 2018. The Gremlin Graph Traversal Machine and Language. https://tinkerpop.apache.org/gremlin.html.
[30] Leslie G. Valiant. 2011. A bridging model for multi-core computing. J. Comput. Syst. Sci. 77, 1 (2011), 154–166. https://doi.org/10.1016/j.jcss.2010.06.012
[31] Peter T. Wood. 2012. Query Languages for Graph Databases. ACM SIGMOD Record 41, 1 (March 2012).

A GSQL'S MAIN BUILT-IN ACCUMULATOR TYPES
GSQL comes with a list of pre-defined accumulators, some of which we detail here. For details on GSQL's accumulators and more supported types, see the online documentation at https://docs.tigergraph.com/dev/gsql-ref.

SumAccum<N>, where N is a numeric type. This accumulator holds an internal value of type N, accepts inputs of type N, and aggregates them into the internal value using addition.

MinAccum<O>, where O is an ordered type. It computes the minimum value of its inputs of type O.

MaxAccum<O>, as above, swapping max for min aggregation.

AvgAccum<N>, where N is a numeric type. This accumulator computes the average of its inputs of type N. It is implemented in an order-invariant way by internally maintaining both the sum and the count of the inputs seen so far.

OrAccum, which aggregates its boolean inputs using logical disjunction.
AndAccum, which aggregates its boolean inputs using logical conjunction.

MapAccum<K,V> stores an internal value of map type, where K is the type of keys and V the type of values. V can itself be an accumulator type, thus specifying how to aggregate values mapped to the same key.

[...]

Table 1: Datasets.

  Name       Description                             Vertices   Edges
  graph500   Synthetic Kronecker graph               2.4M       67M
  twitter    Twitter user-follower directed graph    41.6M      1.47B

Hardware. We ran the single-machine experiments on an Amazon EC2 r4.8xlarge instance. For the single-machine scale-up experiment, we used r4.2xlarge, r4.4xlarge and r4.8xlarge instances. The multi-machine experiment used r4.2xlarge instances to form clusters of different sizes.

B.2 Data Loading Test
Methodology. For both data sets, we used the GSQL DDL to create a graph schema containing one vertex type and one edge type. The edge type is a directed edge connecting the vertex type to itself. A declarative loading job was written in the GSQL loading language. There is no pre-processing (e.g., extracting unique vertex ids) or post-processing (e.g., index building).
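For concreteness, a schema and loading job of the kind described might look as follows. This is a sketch following TigerGraph's documented loading language; the type names, separator, and file path are ours, not the benchmark's published scripts (which are available in the repository [5]):

CREATE VERTEX MyNode (PRIMARY_ID id UINT)
CREATE DIRECTED EDGE MyLink (FROM MyNode, TO MyNode)
CREATE GRAPH TestGraph (MyNode, MyLink)

CREATE LOADING JOB loadEdgeList FOR GRAPH TestGraph {
  DEFINE FILENAME f;
  // each input line "src dst" creates one edge; endpoint vertices are
  // created implicitly, with no separate vertex-extraction pass
  LOAD f TO EDGE MyLink VALUES ($0, $1) USING SEPARATOR=" ", HEADER="false";
}
RUN LOADING JOB loadEdgeList USING f="/data/edges.csv"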
Results Discussion. TigerGraph loaded the twitter data at a speed of roughly 100 GB/hour per machine (Table 3). TigerGraph automatically encodes and compresses the raw data to less than half its original size: Twitter shows 2.57x compression and graph500 2x compression. The size of the loaded data is an important consideration for system cost and performance. All else being equal, a compactly stored database can store more data on a given machine and has faster access times, because it gets more cache and memory page hits. The measured peak memory usage for loading was 2.5% for graph500 and 7.08% for Twitter, while peak CPU usage was 48.4% for graph500 and 60.3% for Twitter.

Table 3: Loading results.

  Name       Raw Size   TigerGraph Size   Duration
  graph500   967M       482M              56s
  twitter    24,375M    9,500M            813s

B.3 Querying Test
B.3.1 Q1.
A. Single-machine k-hop test. The k-hop-path neighbor count query, which asks for the total count of the vertices reachable from a starting vertex via a k-length simple path, is a good stress test for graph traversal performance.

Methodology. For each data set, we count all k-hop-path endpoint vertices for 300 fixed random seeds, sequentially. By "fixed random seed" we mean that we make a one-time random selection of N vertices from the graph and save this list as a repeatable input condition for our tests. We measure the average query response time for k = 1, 2, 3, 6, 9, 12, respectively.
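In GSQL, the k-hop-path neighbor count is a bounded variant of the visited-marker traversal sketched in Section 2 (again our illustration, not the benchmark's published query; schema names as in the loading sketch above):

CREATE QUERY kHopCount(VERTEX<MyNode> seed, INT k) FOR GRAPH TestGraph {
  OrAccum @visited;
  SumAccum<INT> @@count;   // distinct vertices reached within k hops

  Frontier = {seed};
  Frontier = SELECT s FROM Frontier:s ACCUM s.@visited += true;
  FOREACH i IN RANGE[1, k] DO
    Frontier = SELECT t
               FROM Frontier:s -(MyLink>:e)- MyNode:t
               WHERE t.@visited == false
               ACCUM t.@visited += true
               POST_ACCUM @@count += 1;
  END;
  PRINT @@count;
}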
Results Discussion. For both data sets, TigerGraph can answer deep-link k-hop queries, as shown in Tables 4 and 5. For graph500, it answers 12-hop queries in circa 4 seconds. For the billion-edge Twitter graph, it answers 12-hop queries in under 3 minutes. We note the significance of this accomplishment by pointing out that, from 6 hops onward, the average number of k-hop-path neighbors per query is around 35 million. In contrast, we have benchmarked 5 other top commercial-grade property graph databases on the same tests, and all of them started failing on some 3-hop queries, while failing on all 6-hop queries [5]. Also note that peak memory usage stays constant regardless of k. This constant memory footprint enables TigerGraph to follow links to unlimited depth and perform what we term "deep-link analytics". CPU utilization increases with the hop count; this is expected, as each hop can discover new neighbors at an exponential growth rate, leading to more CPU usage.

Table 4: graph500 - K-hop query on r4.8xlarge instance.

  K-hop   Avg RESP    Avg N       CPU     MEM
  1       5.95ms      5,128       4.3%    3.8%
  2       69.94ms     488,723     72.3%   3.8%
  3       409.89ms    1,358,948   84.7%   3.7%
  6       1.69s       1,524,521   88.1%   3.7%
  9       2.98s       1,524,972   87.9%   3.7%
  12      4.25s       1,524,300   89.0%   3.7%

Table 5: Twitter - K-hop query on r4.8xlarge instance.

  K-hop   Avg RESP    Avg N        CPU     MEM
  1       22.60ms     106,362      9.8%    5.6%
  2       442.70ms    3,245,538    78.6%   5.7%
  3       6.64s       18,835,570   99.9%   5.8%
  6       56.12s      34,949,492   100%    7.6%
  9       109.34s     35,016,028   100%    7.7%
  12      163.00s     35,016,133   100%    7.7%

B. Single-machine k-hop scale-up test. In this test, we study TigerGraph's scale-up ability, i.e., its ability to increase performance with an increasing number of cores on a single machine.

Methodology. The same query workload was tested on machines with different numbers of cores: for a given k and data set, 22 client threads keep sending the 300 k-hop queries to a TigerGraph server concurrently. When a query responds, the client thread responsible for that query moves on to send the next unprocessed query. We tested the same k-hop query workloads on three machines from the R4 family: r4.2xlarge (8 vCPUs), r4.4xlarge (16 vCPUs), and r4.8xlarge (32 vCPUs).

Results Discussion. As shown by Figure 9, TigerGraph scales up 3-hop query throughput linearly with the number of cores: 300 queries on the Twitter data set take 5774.3s on an 8-core machine, 2725.3s on a 16-core machine, and 1416.1s on a 32-core machine. These machines belong to the [...]
White Paper, January, 2019 Deutsch, Xu, Wu and Lee
Given contexts ctx 1 and ctx 2 = {ni 7→ vi }1≤i ≤k , we say SumAccum<N> is the type of accumulators where the in-
that ctx 2 overrides ctx 1 , denoted ctx 2 ▷ ctx 1 and defined as: ternal value and input have numeric type N/string their
default value is 0/the empty string and the combiner operation
ctx 2 ▷ ctx 1 = c k where is the arithmetic +/string concatenation, respectively.
c0 = ctx 1 , and MinAccum<O> is the type of accumulators where the in-
ci = {ni 7→ vi } ▷ c i−1 , for 1 ≤ i ≤ k ternal value and input have ordered type O (numeric, date-
time, string, vertex) and the combiner operation is the binary
Consistent Contexts. We call two contexts ctx 1 , ctx 2 con- minimum function. The default values are the (architecture-
sistent if they agree on every name in the intersection of dependent) minimum numeric value, the default date, the
their domains. That is, for each x ∈ dom(ctx 1 ) ∩ dom(ctx 2 ), empty string, and undefined, respectively. Analogously for
ctx 1 (x) = ctx 2 (x). MaxAccum<O>.
Merged Contexts. For consistent contexts, we can define AvgAccum<N> stores as internal value a pair consisting
ctx 1 ∪ ctx 2 , which denotes the merged context over the union of the sum of inputs seen so far, and their count. The combiner
of the domains of ctx 1 and ctx 2 : adds the input to the running sum and increments the count.
The default value is 0.0 (double precision).
ctx 1 (n), if n ∈ dom(ctx 1 )
ctx 1 ∪ ctx 2 (n) = AndAccum stores an internal boolean value (default true,
ctx 2 (n), otherwise and takes boolean inputs, combining them using logical con-
junction. Analogously for OrAccum, which defaults to false
D.3 Accumulators
A GSQL query can declare accumulators whose names come from Accg, a countable set of global accumulator names, and from Accv, a disjoint countable set of vertex accumulator names.

Accumulators are data types that store an internal value and take inputs that are aggregated into this internal value using a binary operation. GSQL distinguishes between two accumulator flavors:
• Vertex accumulators are attached to vertices, with each vertex storing its own local accumulator instance.
• Global accumulators have a single instance.

Accumulators are polymorphic, being parameterized by the type S of the stored internal value, the type I of the inputs, and the binary combiner operation

⊕ : S × I → S.

Accumulators implement two assignment operators. Denoting with a.val the internal value of accumulator instance a,
• a = i sets a.val to the provided input i;
• a += i aggregates the input i into a.val using the combiner, i.e. sets a.val to a.val ⊕ i.

Each accumulator instance has a pre-defined default for the internal value. When an accumulator instance a is referenced in a GSQL expression, it evaluates to the internally stored value a.val. Therefore, the context must associate the internally stored value to the instance of global accumulators (identified by name) and of vertex accumulators (identified by name and vertex).
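To make the two flavors and the += operator concrete, consider the following minimal sketch (the Social graph, Person vertex type, and Follows edge type are illustrative assumptions, not schema elements taken from this paper):

  CREATE QUERY AccumFlavors() FOR GRAPH Social {
    SumAccum<INT> @outDeg = 0;   // vertex accumulator: one instance per vertex
    SumAccum<INT> @@edgeCnt = 0; // global accumulator: a single instance

    Start = {Person.*};
    S = SELECT s
        FROM Start:s -(Follows>:e)- Person:t
        ACCUM s.@outDeg += 1,    // combined into s's local instance
              @@edgeCnt += 1;    // combined into the single global instance
    PRINT @@edgeCnt;
  }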
Specific Accumulator Types. We revisit the accumulator types listed in Section A.

SumAccum<N> is the type of accumulators where the internal value and input have numeric type N/string; their default value is 0/the empty string and the combiner operation is the arithmetic +/string concatenation, respectively.

MinAccum<O> is the type of accumulators where the internal value and input have ordered type O (numeric, datetime, string, vertex) and the combiner operation is the binary minimum function. The default values are the (architecture-dependent) minimum numeric value, the default datetime, the empty string, and undefined, respectively. Analogously for MaxAccum<O>.

AvgAccum<N> stores as internal value a pair consisting of the sum of inputs seen so far, and their count. The combiner adds the input to the running sum and increments the count. The default value is 0.0 (double precision).

AndAccum stores an internal boolean value (default true) and takes boolean inputs, combining them using logical conjunction. Analogously for OrAccum, which defaults to false and uses logical disjunction as combiner.

MapAccum<K,V> stores an internal value that is a map m, where K is the type of m's keys and V the type of m's values. V can itself be an accumulator type, specifying how to aggregate values mapped by m to the same key. The default internal value is the empty map. An input is a key-value pair (k, v) ∈ K × V. The combiner works as follows: if the internal map m does not have an entry involving key k, m is extended to associate k with v. If k is already defined in m, then if V is not an accumulator type, m is modified to associate k to v, overwriting k's former entry. If V is an accumulator type, then m(k) is an accumulator instance. In that case m is modified by replacing m(k) with the new accumulator obtained by combining v into m(k) using V's combiner.
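The effect of a nested combiner can be sketched as follows (same assumed schema, plus a hypothetical country attribute on Person): because the map's value type is itself a SumAccum, inputs that share a key are summed rather than overwritten.

  CREATE QUERY FollowsPerCountry() FOR GRAPH Social {
    // Values are SumAccum<INT>, so inputs sharing a key combine with +.
    MapAccum<STRING, SumAccum<INT>> @@perCountry;

    Start = {Person.*};
    S = SELECT s
        FROM Start:s -(Follows>:e)- Person:t
        ACCUM @@perCountry += (s.country -> 1);
    PRINT @@perCountry;
  }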
D.4 Declarations
The semantics of a declaration is a function from contexts to contexts.

When they are created, accumulator instances are initialized by setting their stored internal value to a default that is defined with the accumulator type. Alternatively, they can be initialized by explicitly setting this default value using an assignment:

type @n = e

declares a vertex accumulator of name n ∈ Accv and type type, all of whose instances are initialized with [[e]]ctx, the result of evaluating the expression in the current context. The effect of this declaration is to create the initialized accumulator instances and extend the current context ctx appropriately. Note that a vertex accumulator instance is identified by the accumulator name and the vertex hosting the instance. We model this by having the context associate the vertex accumulator name with a map from vertices to instances:

[[type @n = e]](ctx) = ctx′

where

ctx′ = {@n ↦ ⋃_{v ∈ V} {v ↦ [[e]]ctx}} ▷ ctx.

Similarly,

type @@n = e

declares a global accumulator named n ∈ Accg of type type, whose single instance is initialized with [[e]]ctx. This instance is identified in the context simply by the accumulator name:

[[type @@n = e]](ctx) = {@@n ↦ [[e]]ctx} ▷ ctx.

Finally, global variable declarations also extend the context:

[[baseType var = e]](ctx) = {var ↦ [[e]]ctx} ▷ ctx.
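In GSQL source, the three declaration forms with explicitly overridden defaults might look as follows (a sketch; the names, types and initial values are illustrative):

  CREATE QUERY DeclExamples() FOR GRAPH Social {
    MinAccum<INT> @minDist = 1000000;     // every vertex instance starts at 1000000
    SumAccum<DOUBLE> @@totalWeight = 0.5; // the single global instance starts at 0.5
    INT hopLimit = 3;                     // global variable, also extends the context
    PRINT hopLimit;
  }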
D.5 DARPE Semantics
DARPEs specify a set of paths in the graph, formalized as follows.

Paths. A path p in graph G is a sequence

p = v0, e1, v1, e2, v2, . . . , vn−1, en, vn

where v0 ∈ V and for each 1 ≤ i ≤ n,
• vi ∈ V, and
• ei ∈ E, and
• vi−1 and vi are the endpoints of edge ei regardless of ei's orientation: (vi−1, vi) ∈ st(ei) or (vi, vi−1) ∈ st(ei).

Path Length. We call n the length of p and denote it with |p|. Note that when p = v0 we have |p| = 0.

Path Source and Target. We call v0 the source, and vn the target of path p, denoted src(p) and tgt(p), respectively. When n = 0, src(p) = tgt(p) = v0.

Path Hop. For each 1 ≤ i ≤ n, we call the triple (vi−1, ei, vi) the hop i of path p.

Path Label. We define the label of hop i in p, denoted λi(p), as follows:

λi(p) = τe(ei), if ei is undirected; τe(ei)>, if ei is directed from vi−1 to vi; <τe(ei), if ei is directed from vi to vi−1

where τe(ei) denotes the type name of edge ei, and τe(ei)> denotes a new symbol obtained by concatenating τe(ei) with >, and analogously for <τe(ei).

We call the label of p, denoted λ(p), the word obtained by concatenating the hop labels of p:

λ(p) = ϵ, if |p| = 0; λ1(p)λ2(p) . . . λ|p|(p), if |p| > 0

where, as usual [18], ϵ denotes the empty word.

DARPE Satisfaction. We denote with

Σ{>,<} = ⋃_{t ∈ Te} {t, t>, <t}

the set of symbols obtained by treating each edge type t as a symbol, as well as creating new symbols by concatenating t with > and < with t.

We say that path p satisfies DARPE D, denoted p |= D, if λ(p) is a word in the language L(D) accepted by D when viewed as a regular expression over the alphabet Σ{>,<} [18]:

p |= D ⇔ λ(p) ∈ L(D).

DARPE Match. We say that DARPE D matches path p (and that p is a match for D) whenever p is a shortest path that satisfies D:

p matches D ⇔ p |= D ∧ |p| = min_{q |= D, src(q)=src(p), tgt(q)=tgt(p)} |q|.

We denote with [[D]]G the set of matches of a DARPE D in graph G.
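For illustration (the edge type name is ours): suppose Follows is a directed edge type and D = Follows>*. If vertices v and w are connected both by a single Follows edge and by a two-edge Follows chain through an intermediate vertex, then both paths satisfy D (their labels Follows> and Follows>Follows> are both in L(D)), but only the one-edge path is a match for D among the paths from v to w, since matches are required to be shortest.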
D.6 Pattern Semantics
Patterns consist of DARPEs and variables. The former specify a set of paths in the graph, the latter are bound to vertices/edges occurring on these paths.

A pattern P specifies a function from a graph G and a context ctx to
• a set [[P]]G,ctx of paths in the graph, each called a match of P, and
• a family {β_P^p}_{p ∈ [[P]]G,ctx} of bindings for P's variables, one for each match p. Here, β_P^p denotes the binding induced by match p.

Temporary Tables and Vertex Sets. To formalize pattern semantics, we note that some GSQL query statements may construct temporary tables that can be referred to by subsequent statements. Therefore, among others, the context maps the names to the extents (contents) of temporary tables. Since we can model a set of vertices as a single-column, duplicate-free table containing vertex ids, we refer to such tables as vertex sets.

V-Test Match. Consider graph G and a context ctx. Given a v-test VT, a match for VT is a vertex v ∈ V (a path of length 0) such that v belongs to VT (if VT is a vertex set name defined in ctx), or v is a vertex of type VT (otherwise). We denote the set of all matches of VT against G in context ctx with [[VT]]G,ctx:

v ∈ [[VT]]G,ctx ⇔ v ∈ ctx(VT), if VT is a vertex set defined in ctx; τv(v) = VT, if VT ∈ dom(τv), i.e. VT is a defined vertex type.

Variable Bindings. Given graph G and tuple of variables x, a binding for x in G is a function β from the variables in x to vertices or edges in G, β : x → V ∪ E. Notice that a variable binding (binding for short) is a particular kind of context, hence all context-specific definitions and operators apply. In particular, the notion of consistent bindings coincides with that of consistent contexts.

Binding Tables. We refer to a bag of variable bindings as a binding table.

No-hop Path Pattern Match. Given a graph G and a context ctx, we say that path p is a match for no-hop pattern P = VT : x (and we say that P matches p) if p ∈ [[VT]]G,ctx. Note that p is a path of length 0, i.e. a vertex v. The match p induces a binding of vertex variable x to v, β_P^p = {x ↦ v}. (Note that both VT and x are optional. If VT is missing, then it is trivially satisfied by all vertices. If x is missing, then the induced binding is the empty map β_P^p = { }. These conventions apply for the remainder of the presentation and are not repeated explicitly.)
One-hop Path Pattern Match. Recall that in a one-hop pattern

P = S : s − (D : e) − T : t,

D is a disjunction of direction-adorned edge types, D ⊆ Σ{>,<}, S, T are v-tests, s, t are vertex variables and e is an edge variable. We say that (single-edge) path p = v0, e1, v1 is a match for P (and that P matches p) if v0 ∈ [[S]]G,ctx, and p ∈ [[D]]G, and v1 ∈ [[T]]G,ctx. The binding induced by this match, denoted β_P^p, is β_P^p = {s ↦ v0, t ↦ v1, e ↦ e1}.

Multi-hop Single-DARPE Path Pattern Match. Given a DARPE D, path p is a match for multi-hop path pattern P = S : s − (D) − T : t (and P matches p) if p ∈ [[D]]G, and src(p) ∈ [[S]]G,ctx and tgt(p) ∈ [[T]]G,ctx. The match induces a binding β_P^p = {s ↦ src(p), t ↦ tgt(p)}.

Multi-DARPE Path Pattern Match. Given DARPEs {Di}1≤i≤n, a match for path pattern

P = S0 : s0 − (D1) − S1 : s1 − . . . − (Dn) − Sn : sn

is a tuple of segments (p1, p2, . . . , pn) of a path p = p1p2 . . . pn such that p ∈ [[D1.D2. . . . .Dn]]G, and for each 1 ≤ i ≤ n: pi ∈ [[Si−1 : si−1 − (Di) − Si : si]]G,ctx. Notice that p is a shortest path satisfying the DARPE obtained by concatenating the DARPEs of the pattern. Also notice that there may be multiple matches that correspond to distinct segmentations of the same path p. Each match induces a binding

β_P^{p1...pn} = ⋃_{i=1}^{n} β_{Si−1:si−1−(Di)−Si:si}^{pi}.

Notice that the individual bindings induced by the segments are pairwise consistent, since the target vertex of a path segment coincides with the source of the subsequent segment.

Consistent Matches. Given two patterns P1, P2 with matches p1, p2 respectively, we say that p1, p2 are consistent if the bindings they induce are consistent. Recall that bindings are contexts, so they inherit the notion of consistency.

Conjunctive Pattern Match. Given a conjunctive pattern P = P1, P2, . . . , Pn, a match of P is a tuple of paths p = p1, p2, . . . , pn such that pi is a match of Pi for each 1 ≤ i ≤ n and all matches are pairwise consistent. p induces a binding on all variables occurring in P,

β_P^p = ⋃_{i=1}^{n} β_{Pi}^{pi}.
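Under the assumed Social schema, a multi-DARPE pattern with two segments might read as follows (a sketch, not an example from the paper):

  S = SELECT t
      FROM Person:s -(Follows>*1..2)- Person:m -(Follows>:e)- Person:t;

Here s, m and t are bound as the multi-DARPE semantics prescribes; consistent with the bindings defined above, no edge variable is bound on the starred segment, while e is bound on the single-hop segment.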
D.7 Atom Semantics
Recall that the FROM clause comprises a sequence of relational or graph atoms. Atoms specify a pattern and a collection to match it against. Each match induces a variable binding. Note that multiple distinct matches of a pattern can induce the same variable binding if they only differ in path elements that are not bound to the variables. To preserve the multiplicities of matches, we define the semantics of atoms as a function from contexts to binding tables, i.e. bags of bindings for the variables introduced in the pattern.

Given relational atom T AS x where T is a table name and x a tuple variable, the matches of x are the tuples t from T's extent, t ∈ ctx(T), and they each induce a binding β_x^t = {x ↦ t}. The semantics of T AS x is the function

[[T AS x]]ctx = ⨄_{t ∈ ctx(T)} {|β_x^t|}.

Here, {|x|} denotes the singleton bag containing element x with multiplicity 1, and ⨄ denotes bag union, which is multiplicity-preserving (like SQL's UNION ALL operator).

Given graph atom G AS P where P is a conjunctive pattern, its semantics is the function

[[G AS P]]ctx = ⨄_{p ∈ [[P]]ctx(G),ctx} {|β_P^p|}.

When the graph is not explicitly specified in the atom, we use the default graph DG, which is guaranteed to be set in the context:

[[P]]ctx = ⨄_{p ∈ [[P]]ctx(DG),ctx} {|β_P^p|}.
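For example (assumed schema as before): if two parallel Follows edges connect the same vertices v and w, the pattern Person:s -(Follows>)- Person:t, which binds no edge variable, has two distinct matches that induce the same binding {s ↦ v, t ↦ w}; the bag semantics therefore retains that binding with multiplicity 2.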
D.8 FROM Clause Semantics
The FROM clause specifies a function from contexts to (context, binding table) pairs that preserves the context:

[[FROM a1, . . . , an]](ctx) = (ctx, ⨄ {| ⋃_{i=1}^{n} βi |})

where the bag union ⨄ ranges over all combinations β1 ∈ [[a1]]ctx, . . . , βn ∈ [[an]]ctx that are pairwise consistent and consistent with ctx.

The requirement that all βi's be pairwise consistent addresses the case when atoms share variables (thus implementing joins). The consistency of each βi with ctx covers the case when the pattern P mentions variables defined prior to the current query block. These are guaranteed to be defined in ctx if the query passes the semantic checks.
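A minimal sketch of such a join (assumed schema as before, with hypothetical LivesIn edges and City vertices): the two graph atoms below share the variable t, so the FROM clause retains exactly the pairwise-consistent combinations of their bindings, pairing each follower s with the cities c of the followed person t.

  S = SELECT c
      FROM Person:s -(Follows>:e1)- Person:t,
           Person:t -(LivesIn>:e2)- City:c;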
D.9 Term Semantics
Note that GSQL terms conform to SQL syntax, extended to also allow accumulator- and type-specific terms of form x.@A, x.@A′, @@A and @@A′.

Terms of form x.@A evaluate to the value of the vertex accumulator instance located at the vertex denoted by variable x:

[[x.@A]]ctx = ctx(@A)(ctx(x)).

Terms of form x.@A′ evaluate to the prior value of the vertex accumulator instance located at the vertex denoted by variable x:

[[x.@A′]]ctx = ctx(@A′)(ctx(x)).

Analogously for global accumulators: terms of form @@A evaluate to the value of the global accumulator instance, and terms of form @@A′ evaluate to the prior value of the global accumulator instance, as recorded in the context:

[[@@A]]ctx = ctx(@@A) and [[@@A′]]ctx = ctx(@@A′).
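For instance (a worked evaluation over an assumed context): if ctx(@A) = {v ↦ 5}, ctx(@A′) = {v ↦ 3} and ctx(x) = v, then [[x.@A]]ctx = 5 while [[x.@A′]]ctx = 3, i.e. the primed term reads the previously recorded value of v's instance of @A.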
Figure 11: The Semantics of the Acc-Input Generation Phase Is a Function [[·]]η
where ctx′ = [[d1]] ◦ · · · ◦ [[dm]] ◦ [[s1]] ◦ · · · ◦ [[sk]]({p1 ↦ a1, . . . , pn ↦ an} ▷ ctx).
For the common special case when the query runs against a single graph, we support a version in which this graph is mentioned in the query declaration and treated as the default graph, so it does not need to be explicitly mentioned in the patterns.