A Linearizability-based Hierarchy for Concurrent Specifications

Armando Castañeda, Instituto de Matemáticas, UNAM, Mexico City, Mexico, armando.castaneda@im.unam.mx
Sergio Rajsbaum, Instituto de Matemáticas, UNAM, Mexico City, Mexico, rajsbaum@im.unam.mx
Michel Raynal, IRISA, University of Rennes, France & Polytechnic University, Hong Kong, raynal@irisa.fr
Linearizability is the standard approach to arguing the safety properties of a sequentially specified object. However, some concurrent objects do not have sequential specifications, or specifying them sequentially causes performance degradation. An overview is given of two linearizability-style notions, set-linearizability and interval-linearizability, that can be used to argue the safety properties of progressively more general concurrent objects without losing the composability, nonblockingness and state-based benefits of linearizability. The presentation includes several examples of current importance such as lattice agreement, Java's exchanger, relaxed queues and batched counters.

A sequential specification of an object defines the behavior of the object in executions where operations are invoked sequentially, one after the other, either by the same process or by different processes. It is formally defined by an automaton and a set of operations that can be used to manipulate the object. Each transition of the automaton is labeled with an operation invocation op(𝑣) → 𝑣′, with a parameter 𝑣, and its response, 𝑣′. If the transition starts in state 𝑠 and ends in state 𝑠′, it means that if operation op(𝑣) is invoked when the object is in state 𝑠, the response to the operation would be 𝑣′, and the object would move to state 𝑠′. It is common to assume that the automaton has a single initial state, and that any operation can be invoked in any state of the object, namely the specification is total.

Thus, a sequential specification defines a set of valid sequential executions, i.e., each such execution 𝑆 is a finite sequence of operation invocations together with the corresponding responses, starting in the initial state of the automaton.

Sidebar 1: Sequential specifications.
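To make the automaton view concrete, here is a minimal Java sketch (ours, not taken from the article) of a sequential stack specification: the state is the stack contents, there is a single initial state (the empty stack), and each method call corresponds to one transition labeled op(𝑣) → 𝑣′.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch (ours, not from the article) of a sequential specification in the
// sense of Sidebar 1: the automaton state is the stack contents, and each method call
// is one transition op(v) -> v'.
final class SequentialStackSpec {
    private final Deque<Integer> state = new ArrayDeque<>(); // single initial state: empty stack

    // Transition labeled push(v) -> true; it is enabled in every state (the specification is total).
    boolean push(int v) {
        state.push(v);
        return true;
    }

    // Transition labeled pop() -> v, or pop() -> null when the stack is empty.
    Integer pop() {
        return state.isEmpty() ? null : state.pop();
    }
}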
1 INTRODUCTION
In the early 2000's the multicore revolution began, when it became difficult to increase the clock speed of microprocessors, and manufacturers shifted to the approach of increasing performance through multiple processing cores per chip. The "free lunch" of relying on increasing clock speed to obtain faster programs was over. Unfortunately, developing multi-process programs can be dramatically more difficult. The difficulty of reasoning about many things happening at the same time is compounded by the fact that processes are executed in a highly asynchronous and unpredictable way, possibly even crashing, and furthermore, affected by low level architectural details. An approach that has been strongly advocated to cope with these difficulties was elegantly presented by Shavit [36]:

    It is infinitely easier and more intuitive for us humans to specify how abstract data structures behave in a sequential setting, where there are no interleavings. Thus, the standard approach to arguing the safety properties of a concurrent data structure is to specify the structure's properties sequentially.

Providing the illusion of a sequential computation from the users' perspective has been used since the early seminal works that paved the way of modern distributed systems, e.g., by Lamport [27], as well as in the subsequent advances in defining general, practical correctness conditions, most notably, linearizability introduced by Herlihy and Wing [24]. The successful history of reducing the complexity of concurrency through sequential thinking spans over half a century, but may be reaching its limits [31].

First limitation: inherently concurrent problems. It is not clear that providing users with the illusion of a sequential computation should go as far as implementing solutions only to sequential problems. It is not obvious because some problems are inherently concurrent. Consider for example a ticket reservation system, say for seats at a concert. There is nothing wrong with taking care of two reservations of different seats concurrently, and actually it may be more efficient than serializing the two reservations. Furthermore it may be acceptable to let both reservations happen at the same time from the users' perspective, namely, none of them went first. Linearizability precludes such concurrent problems from being implemented in a distributed system simply because, by definition, linearizability is a tool to show that a concurrent algorithm implements a problem specified through a sequential specification (see Sidebars 1 and 2). In the case of the ticket reservation example, a sequential version would be a queue. In what sense is a queue a "sequential version" of the data structure of the example? This is an interesting philosophical question in its own right; in any case, a sequential version may artificially change the semantics of the problem.

There are examples of distributed problems that do not have any sequential version that could even remotely mimic their behavior. Consider Java's Exchanger object, which allows two processes (threads) to atomically exchange a value, if invocations are concurrent. In a sequential version, the object would always return that the exchange failed to take place.

Second limitation: the penalty of sequential specifications. The second reason for doubting the axiom of illusion of sequentiality is that linearizable implementations of sequential specifications may be expensive, or even impossible to implement. A classic result is the impossibility of solving consensus by asynchronous processes that may crash, using only simple read/write primitives [15, 28]. It is impossible to build concurrent implementations of some of the classic sequential specifications (e.g. sets, queues, stacks) that completely
eliminate the use of expensive synchronization primitives [5]. Finally, there are formal complexity lower bounds for some sequential objects, e.g. [11]. As a result, distributed system designers in some cases have to give up either the idealized goals of scalability and availability, or relax the linearizability consistency requirement.

Three benefits of linearizability. In spite of these limitations, programmers are afraid of relaxing consistency for very real and concrete technical reasons. Building large systems demands composability, and up to now linearizability is the de facto standard because it allows us to do so: it is sufficient to prove that each of the implementations of concurrent objects is linearizable to guarantee that the whole system of multiple objects is also linearizable. Additionally, linearizability is particularly attractive for reasoning about nonblocking implementations (see Sidebar 3). Finally, in any linearizable implementation there is a well-defined notion of the state of the system at any time, which in turn facilitates writing correctness proofs, as discussed in [22].

Let 𝑂 be a sequentially specified object, and consider any concurrent execution (also called history or trace in the literature) where processes call operations on 𝑂. Every operation call spans an interval of time, from the moment the invocation of the operation is issued, to the moment a value is returned to the process that invoked the operation.

Conceptually, linearizability states that every operation call appears as if it takes effect "instantaneously" at a unique point in time between its invocation and response (no two operations taking place at the same time). A linearization of an execution is thus defined by this sequence of points; it is a sequential execution 𝑆 with all operation invocations of the concurrent execution, each one together with its response. Notice that if an operation op1() terminates before an operation op2() starts, i.e., the response of op1() happens before the invocation of op2(), then op1() must occur before op2() in the sequential execution 𝑆. Namely, linearizability requires 𝑆 to respect the real-time order in the concurrent execution. Finally, linearizability requires that the operation responses are valid according to the sequential specification of 𝑂, that is, there exists a linearization 𝑆 that is a valid sequential execution of 𝑂.

[Figure: timelines of 𝑝1 (push(2), push(3)), 𝑝2 (push(1), pop() → 1) and 𝑝3 (pop() → 2, pop() → 3), a fourth line with the linearization points, and the state of the stack after each operation.]

Consider a stack accessed by three processes, 𝑝1, 𝑝2 and 𝑝3. An example of a concurrent execution is depicted in the first three horizontal lines of the figure, and a possible sequence of operations 𝑆 is represented as points in the fourth horizontal line. The reader can verify that 𝑆 indeed defines a linearization of the execution, assuming that initially the stack is empty.

Sidebar 2: Recalling the notion of linearization.
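The last step of this definition, checking that a candidate linearization is a valid sequential execution, can be mechanized by replaying the candidate sequence against the sequential specification. The following Java sketch (ours, with an assumed stack object and a made-up encoding of operations) illustrates the idea; finding such a candidate in the first place still requires considering the orders allowed by real time.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// A minimal sketch (ours, not from the article): replay a candidate linearization,
// i.e., a sequence of operation calls with their responses, against the sequential
// specification of a stack and check that every response matches the specification.
final class LinearizationCheck {
    // One operation call of the candidate: "push" with its argument, or "pop" with
    // the response that was observed in the concurrent execution.
    record Op(String name, int value) {}

    static boolean isValidSequentialExecution(List<Op> candidate) {
        Deque<Integer> stack = new ArrayDeque<>();            // initial state of the automaton
        for (Op op : candidate) {
            if (op.name().equals("push")) {
                stack.push(op.value());                       // push(v) -> true, always valid
            } else if (stack.isEmpty() || stack.pop() != op.value()) {
                return false;                                 // pop() response contradicts the spec
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A hypothetical candidate linearization: push(1), push(2), pop() -> 2, pop() -> 1.
        System.out.println(isValidSequentialExecution(
            List.of(new Op("push", 1), new Op("push", 2), new Op("pop", 2), new Op("pop", 1))));
    }
}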

The universe of concurrent object specifications. Numerous correctness conditions have been proposed over the years. More recently, algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures, requiring new correctness conditions. For example, Viotti and Vukolić [39] present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed memory. Yet, the sequential paradigm is so entrenched that correctness of concurrent implementations is understood in terms of conditions that determine relationships between concurrent executions of an implementation and sequential executions of the object being implemented. Even recently proposed correctness conditions for recoverable objects in multicore architectures with durable persistent memory are based on sequential specifications, e.g., strict linearizability [1], recoverable linearizability [4] and durable linearizability [25].

Concurrent specifications. The desire for truly concurrent semantics is old, going back to Lamport [27], where a specification of a concurrent object is simply the set of all the concurrent executions that are considered correct. Reasons for the desire of concurrent specifications have been argued since at least the seminal work of Montanari [29]: concurrent specifications are more informative, and testing sequences defining partial orderings may carry the same information as an exponentially larger number of interleaving traces. Another reason is that in truly concurrent models the existing fine parallelism of the application is fully specified. In some situations, truly concurrent semantics is actually the most natural, as in Petri nets. Other important works presenting concurrent semantics have been proposed (e.g. [14, ?]).

Linearizability-based concurrent specifications. The linearizability framework (see Sidebars 1 and 2) can be progressively extended to specify more concurrent objects, as illustrated in Figure 1. The automaton used in a sequential specification is modified to specify set-sequential objects, where a transition can be labeled with more than one operation taking place at the same time. Further, interval-sequential objects are defined with an automaton where operations overlap in arbitrary ways. The aim is to define concurrent specifications while preserving the notion of state. Set-linearizability and interval-linearizability, the associated correctness conditions, define a way of associating a concurrent execution with a concurrent specification, either of a set-sequential or an interval-sequential object. Intuitively, we move from linearization points, to linearization sets of points, and more generally to linearization intervals, representing the overlapping in time among concurrently executed operations.

The goal of this paper is to describe set-linearizability and interval-linearizability, stressing that the benefits of linearizability are kept: both are composable, state-based and nonblocking conditions. First, set-linearizability [20, 30], where more than one operation can be linearized at the same point. Then, interval-linearizability [8], where operations can overlap in arbitrary ways. In fact, interval-linearizable specifications have been shown to be the most expressive ones [17].
[Figure: the containment Sequential Objects ⊂ Set-Sequential Objects ⊂ Interval-Sequential Objects, with an execution related to each class of objects by linearizability, set-linearizability and interval-linearizability, respectively.]
Figure 1: A linearizability-based hierarchy.

Linearizability is a nonblocking consistency condition for sequential specifications that are total. Intuitively, this means that linearizability, by itself, never requires blocking a process (maybe because of the need to use locks) to satisfy the linearizability requirements. More formally, it states that a pending invocation of an operation is never inherently required to wait for another pending invocation to complete. This property opens the possibility of linearizable implementations satisfying nonblocking progress conditions:
• In a wait-free implementation, every process is guaranteed to complete its operations in a finite number of steps, independently of the behavior of other processes.
• In a lock-free implementation, some process is guaranteed to complete its operations within a bounded number of steps.
• In an obstruction-free implementation, a process is guaranteed to complete its operations in the absence of contention, i.e., when all other processes stop executing operations.

Sidebar 3: The nonblocking property and nonblocking progress conditions.

Organization. Section 2 presents an overview of the linearizability-based hierarchy, including as an example lattice agreement, an abstraction that has been useful to implement some replicated state machines and blockchains. Section 3 discusses in more detail set-linearizability, including two examples: Java's exchanger and a relaxed queue. Section 4 discusses interval-linearizability, with a detailed example of a batched counter, used to efficiently count events in big data processing systems.

2 SPECIFYING AND IMPLEMENTING A CONCURRENT OBJECT
The lattice agreement problem has been actively investigated recently. Unlike consensus, it is implementable in an asynchronous system where processes can fail. Several papers have presented replicated state machine implementations based on lattice agreement (e.g. [12, 40]), instead of the usual consensus-based implementations. Recently, Kuznetsov, Rieutord and Tucci-Piergiovanni [26] showed how to implement re-configurable lattice agreement, and explained how it can directly be used to obtain re-configurable versions of several sequential types such as max-register, conflict detector, and in fact, any state-based commutative abstract data type (ADT).

A join-semilattice is a tuple (𝐿, ⊑), where 𝐿 is a set partially ordered by the binary relation ⊑, such that for all elements 𝑥, 𝑦 ∈ 𝐿, there exists a least upper bound, called join. For the purposes of this exposition, we take 𝐿 as the set with all finite sets of natural numbers and ⊑ as the subset relation, with the join of 𝑥, 𝑦 being 𝑥 ∪ 𝑦.

Specifying lattice agreement. The lattice agreement abstraction is presented in [26] in the style that has been traditionally used in distributed computing: a list of requirements that operations must satisfy. An operation propose(𝑥) invoked by process 𝑝 with input 𝑥 ∈ 𝐿 returns a value 𝑣′ ∈ 𝐿, such that:
Validity. If a propose(𝑣) operation returns 𝑣′ then 𝑣′ is the join of some proposed values including 𝑣 and all values returned by previous operations.
Consistency. The values returned are totally ordered by ⊑.
There is additionally a progress requirement, that will not be central here; a common one is that if a process invokes a propose operation and does not fail then the operation eventually returns (see Sidebar 3).

This specification is not very formal. The idea of using this style of specification started with the goal of describing objects, by which it is usually meant a sequential specification (Sidebar 1). Consider an automaton defining our lattice agreement example as a sequential object. A transition from 𝑠 to 𝑠′ would be labeled with propose(𝑣) → 𝑣′, meaning that if the object is in state 𝑠, a process that invokes propose(𝑣) gets back 𝑣′ and the object moves to state 𝑠′.

The first challenge is to come up with such a sequential automaton, which would be a formal specification of the above informal list of requirements. A natural one would identify each state with a pair of subsets of 𝐿, to remember all the elements that have been proposed and those that have been returned so far. The initial state is 𝑠0 = (∅, ∅), and for the above transition from 𝑠 = (𝑠1, 𝑠2) to 𝑠′, we would have that 𝑠′ = (𝑠1 ∪ 𝑣, 𝑠2 ∪ 𝑣′). The reader can easily complete the formal specification of the sequential automaton.
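As an illustration, the following Java sketch (ours) encodes this sequential automaton for the join-semilattice of finite sets of naturals, with join being set union. For determinism it always returns the join of everything proposed so far, which is one valid choice among those permitted by the informal requirements.

import java.util.HashSet;
import java.util.Set;

// A minimal sketch (ours) of the sequential lattice agreement automaton sketched above.
// A state is the pair (proposed, returned); the transition labeled propose(v) -> v'
// moves from (s1, s2) to (s1 ∪ v, s2 ∪ v').
final class SequentialLatticeAgreementSpec {
    private final Set<Integer> proposed = new HashSet<>();   // s1: all proposed elements
    private final Set<Integer> returned = new HashSet<>();   // s2: all returned elements

    Set<Integer> propose(Set<Integer> v) {
        proposed.addAll(v);                                   // s1 ∪ v
        // Response: the join of everything proposed so far; it contains v and, since
        // returned ⊆ proposed, also every previously returned value, so Validity holds.
        Set<Integer> response = new HashSet<>(proposed);
        returned.addAll(response);                            // s2 ∪ v'
        return response;                                      // responses only grow: Consistency holds
    }
}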
Identifying correct implementations. The second challenge is to check the correctness of an execution of the object against the sequential automaton specification, following the linearizability definition (Sidebar 2). In Figure 2, an example of an execution for three processes, 𝑝1, 𝑝2 and 𝑝3, is represented, where one can see that operation calls (from invocation to response) overlap in time. Linearizability requires finding a linearization point in between each invocation and response, so that these points define a valid sequential execution of the lattice agreement automaton. In the example of Figure 2, the linearization points do not produce a valid sequential execution because the operation call by 𝑝1 cannot possibly return the value 2, which has not yet been proposed. Furthermore, from a linearizability point of view, the execution is incorrect, because there are no linearization points that satisfy the automaton specification: any way we order the operations by 𝑝1 and 𝑝2 would give an incorrect response for one of them. The same problem occurs with respect to any sequential specification of lattice agreement.¹

¹ However, one can define sequential specifications where the automaton somehow "predicts" the values that will be proposed in the future. The execution in Figure 2 is indeed linearizable with respect to these arguably unreasonable sequential specifications.

[Figure: three overlapping operation calls, 𝑝1: propose({1}) → {1, 2}, 𝑝2: propose({2}) → {1, 2}, 𝑝3: propose({3}) → {1, 2, 3}, each with a linearization point between its invocation and response.]
Figure 2: Example of a non-linearizable lattice agreement execution.

But what is wrong with the execution of Figure 2? It certainly satisfies the lattice agreement consistency requirement stated above. The problem is the validity requirement: the execution would violate it with respect to any sequential specification of lattice agreement. Validity seems to assume a priori that no operations are invoked concurrently. But the whole point of the lattice agreement state machine replication idea was to avoid using consensus to order operations!

There are several lattice agreement implementations (e.g. [26, 40]). For illustration, consider the simple one-shot lattice agreement implementation using read/write primitives on a shared memory in Figure 3 (adapted from [8]); one-shot means that each process invokes the propose operation only once.² In the algorithm, each process first writes its proposal in a dedicated entry of the shared memory (Line 1), and then repeatedly reads the whole memory and computes the join of the proposals that have been written so far, until it sees no new relevant proposal (Lines 3 to 7). The execution of Figure 2 can be produced by this algorithm, if 𝑝1 and 𝑝2 write their proposals (in Line 1, in any order), and then both execute the loop of Line 4 twice, both returning {1, 2}, before 𝑝3 starts executing its code.

² One-shot lattice agreement is equivalent to one-shot atomic snapshot [3].
Shared variables:
𝑀[1, . . . , 𝑛] : array of sets of integers initialized to [∅, . . . , ∅]

operation propose(𝑣𝑖) is
(01) write(𝑀[𝑖], 𝑣𝑖)
(02) 𝑜𝑙𝑑𝑖, 𝑛𝑒𝑤𝑖 ← ∅
(03) for each 1 ≤ 𝑗 ≤ 𝑛 do 𝑛𝑒𝑤𝑖 ← 𝑛𝑒𝑤𝑖 ∪ read(𝑀[𝑗]) end for
(04) repeat
(05)    𝑜𝑙𝑑𝑖 ← 𝑛𝑒𝑤𝑖
(06)    for each 1 ≤ 𝑗 ≤ 𝑛 do 𝑛𝑒𝑤𝑖 ← 𝑛𝑒𝑤𝑖 ∪ read(𝑀[𝑗]) end for
(07) until (𝑜𝑙𝑑𝑖 = 𝑛𝑒𝑤𝑖) end repeat
(08) return(𝑛𝑒𝑤𝑖)
end operation

Figure 3: A one-shot lattice agreement implementation based on read/write primitives (code of process 𝑝𝑖).
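For readers who prefer running code, the following Java transcription (ours) of the algorithm of Figure 3 uses an AtomicReferenceArray in place of the shared memory 𝑀; each entry is written only by its owner, so plain atomic reads and writes suffice.

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReferenceArray;

// A direct Java transcription (ours) of the algorithm of Figure 3. M[j] holds the
// proposal of process j, or null if that process has not proposed yet.
final class OneShotLatticeAgreement {
    private final AtomicReferenceArray<Set<Integer>> M;

    OneShotLatticeAgreement(int n) { M = new AtomicReferenceArray<>(n); }

    // Code of process i (indices 0..n-1); each process invokes propose at most once.
    Set<Integer> propose(int i, Set<Integer> v) {
        M.set(i, Set.copyOf(v));                 // (01) publish the proposal
        Set<Integer> newSet = new HashSet<>();   // (02)
        collectInto(newSet);                     // (03) first collect
        Set<Integer> oldSet;
        do {                                     // (04)-(07) repeat until no new proposal is seen
            oldSet = new HashSet<>(newSet);      // (05)
            collectInto(newSet);                 // (06)
        } while (!oldSet.equals(newSet));
        return newSet;                           // (08) the join of the proposals seen
    }

    private void collectInto(Set<Integer> acc) { // acc ← acc ∪ M[1] ∪ ... ∪ M[n]
        for (int j = 0; j < M.length(); j++) {
            Set<Integer> mj = M.get(j);
            if (mj != null) acc.addAll(mj);
        }
    }
}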
From sequential objects to truly concurrent objects. What is lattice agreement, given that it has no sequential specification? What problem is the algorithm of Figure 3 solving? One encounters publications with similar situations: a list of requirements is used to specify a problem with no sequential specification, and for the lack of a name for such an entity, researchers have either used different ad hoc names such as "abstraction" or "problem" or "type", or simply called it an "object with no sequential specification", without further explanation of what this might be.

Finally, in 1994, Neiger [30] came up with an idea that was largely overlooked in the literature. So overlooked, that some twenty years later Hemed, Rinetzky and Vafeiadis [20] independently rediscovered it – the idea of a set-sequential specification (called concurrency-aware specification in [20]). The transitions of the automaton specification are labeled with sets of operation invocations, each one together with its response, as in Figure 4.

[Figure: transitions between states 𝑠0, 𝑠1 and 𝑠2, each labeled with a set of operations, such as {𝑝1: propose({1}) → resp({1, 2}), 𝑝2: propose({2}) → resp({1, 2})} followed by {𝑝3: propose({3}) → resp({1, 2, 3})}.]
Figure 4: Part of a set-sequential automaton.

The corresponding correctness condition is set-linearizability. Its aim is to allow for the simultaneity of some operations: one can put linearization points grouping together several operations at the same moment of time in a set. Figure 5 shows a set-linearization of the execution we have been considering. It can be tested against the set-sequential automaton illustrated in Figure 4 to show that it is a correct execution, i.e. set-linearizable.

[Figure: the execution of Figure 2 with the operations of 𝑝1 and 𝑝2 set-linearized together at a single linearization point, followed by the operation of 𝑝3.]
Figure 5: A set-linearizable lattice agreement execution.

But now let us consider the execution of Figure 6. Again, it seems to satisfy the consistency requirement of lattice agreement, and it seems correct with respect to an intuitive interpretation of the validity requirement. Furthermore, again there is an algorithm that can produce it, namely the one in Figure 3. As before, 𝑝1 and 𝑝2 execute their write operations (Line 1) concurrently, but now 𝑝1 executes alone the loop of Line 4 twice, while 𝑝2 is delayed, then 𝑝3 executes its write operation, and finally both 𝑝2 and 𝑝3 execute the loop of Line 4 twice.

[Figure: 𝑝1: propose({1}) → {1, 2}; 𝑝2: propose({2}) → {1, 2, 3}, overlapping both other calls; 𝑝3: propose({3}) → {1, 2, 3}, invoked after 𝑝1's response.]
Figure 6: An interval-linearizable execution of lattice agreement that is not set-linearizable.

However, the execution of Figure 6 is not only not linearizable but also not set-linearizable. The reason is that the operations by 𝑝1 and 𝑝3 are not concurrent, and hence they cannot be set-linearized together. But the operation of 𝑝2 must be set-linearized with both, because its proposed value has been returned by the operation of 𝑝1, and it has returned the value proposed by the operation of 𝑝3. Namely, the operation of 𝑝2 could not have taken effect at a single point of time.

This type of example motivated us to propose in [8] one further generalization of linearizability, interval-linearizability. The corresponding generalization of a set-sequential object is an interval-sequential object. It is defined in terms of an automaton whose transitions are labeled with sets of operation invocations, but each such
invocation is not necessarily matched with a response; the response can appear later on in another transition. The interval of each operation is now marked with either one linearization point (in case it appears to be executed instantaneously) or with two linearization points (in case it overlaps with at least two other non-overlapping operations). An example of part of such an automaton for lattice agreement is presented in Figure 7. This automaton validates the execution of Figure 6. Notice that the operation of 𝑝2 is invoked in the first transition, and its corresponding response appears in the second transition, concurrently with the operation of 𝑝3 (a single point). Hence, interval-linearizability extends set-linearizability by allowing time-ubiquity of operations: an operation can appear as being executed concurrently with several consecutive, non-overlapping operations.

[Figure: two transitions 𝑠0 → 𝑠1 → 𝑠2; in the first, 𝑝1: propose({1}) → resp({1, 2}) together with the invocation 𝑝2: propose({2}); in the second, the response 𝑝2: resp({1, 2, 3}) together with 𝑝3: propose({3}) → resp({1, 2, 3}).]
Figure 7: Part of an interval-sequential automaton.

Keeping the benefits of linearizability. We stress that the extensions of linearizability to set-linearizability and interval-linearizability are not done at the price of losing any of its three properties, as proved in [8]. First, both are state-based specifications, which is useful for documentation and correctness proofs. Second, they are composable: one can safely use several linearizable, set-linearizable or even interval-linearizable object implementations because their composition will maintain the corresponding property. Third, the nonblocking property of linearizability also is preserved (see Sidebar 3).

We proceed now to explore in more detail set-linearizability and then interval-linearizability, with additional examples.

3 SET-LINEARIZABILITY
As discussed in the previous section, the idea is to allow predefined subsets of operations to be seen as occurring simultaneously; such a set of operations is called a concurrency class. Hence, set-linearizability is associated with operation simultaneity. A set-sequential object is specified by an automaton whose transitions are labeled with concurrency classes. It defines a set of valid set-sequential executions, each one consisting of a sequence of concurrency classes. The corresponding correctness notion, set-linearizability, allows several operation calls to be linearized at the same linearization point, namely, all these operations belong to the same concurrency class.

Observe that when each concurrency class consists of a single operation, set-linearizability boils down to linearizability (recall Figure 1). Moreover, the containment is strict, since there are set-sequential objects with no sequential specification, such as the two different set-sequential objects described next.

The exchanger object. The Java documentation provides the following specification:

    A synchronization point at which threads can pair and swap elements within pairs. Each thread presents some object on entry to the exchange method, matches with a partner thread, and receives its partner's object on return.

Clearly there is no sequential specification of an exchanger. Such a specification is outside the domain of linearizability simply because linearizability rules out concurrency. An exchanger however can be specified as a set-sequential object whose set-sequential executions contain a concurrency class for each pair of operation calls exchanging elements, and a concurrency class for every operation call that is not able to exchange its element. Figure 8 depicts an example of a set-linearizable execution of a concurrent exchanger implementation.


[Figure: 𝑝1: exchange(a) → ? and exchange(d) → e; 𝑝2: exchange(c) → b and exchange(e) → d; 𝑝3: exchange(b) → c and exchange(f) → ?; the matched pairs (exchange(b), exchange(c)) and (exchange(d), exchange(e)) share a linearization point, while the unmatched calls form singleton concurrency classes.]
Figure 8: Example of a set-linearizable exchanger execution.
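Java ships such an object as java.util.concurrent.Exchanger. The small usage sketch below (ours) shows the pairing behavior; a set-linearization of this run consists of a single concurrency class containing both exchange calls.

import java.util.concurrent.Exchanger;

// A small usage sketch (ours) of java.util.concurrent.Exchanger: two threads pair up
// and swap their values, i.e., exactly one concurrency class of the set-sequential
// specification discussed above.
public class ExchangerDemo {
    public static void main(String[] args) throws InterruptedException {
        Exchanger<String> exchanger = new Exchanger<>();

        Thread t1 = new Thread(() -> {
            try {
                String got = exchanger.exchange("a");   // blocks until a partner arrives
                System.out.println("t1 received " + got);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread t2 = new Thread(() -> {
            try {
                String got = exchanger.exchange("b");   // pairs with t1 and receives "a"
                System.out.println("t2 received " + got);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}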
Exchangers are useful synchronization objects that have been used in a number of concurrent implementations (e.g. [23, 35, 37]); however, the lack of a sequential specification of an exchanger makes correctness proofs intricate. As a concrete example, consider the scalable and linearizable elimination-backoff stack implementation of Hendler, Shavit and Yerushalmi [23]. Very roughly, the idea in this stack implementation is the following: whatever the state of the stack, two concurrent push(𝑥) and pop() invocations can be "eliminated" if the pop() operation returns 𝑥, since the pop operation can be linearized right after push(𝑥); if an operation does not find a concurrent operation to be eliminated with, it uses a "slower" stack implementation to complete its invocation. The elimination scheme is implemented through an array of exchanger objects where operations try to exchange elements.
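The following deliberately simplified Java sketch (ours) conveys the elimination idea with a single Exchanger and an arbitrary timeout, backed by a ConcurrentLinkedDeque as the "slower" stack; the actual algorithm of [23] uses an array of exchangers, adaptive backoff and a lock-free stack.

import java.util.Optional;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.Exchanger;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// A simplified sketch (ours) of the elimination idea: a push offers its value, a pop
// offers "nothing"; if a push and a pop meet at the exchanger, they eliminate each
// other, otherwise they fall back to the backing stack.
final class EliminationStackSketch<T> {
    private final ConcurrentLinkedDeque<T> stack = new ConcurrentLinkedDeque<>();
    private final Exchanger<Optional<T>> slot = new Exchanger<>();

    void push(T x) throws InterruptedException {
        try {
            Optional<T> other = slot.exchange(Optional.of(x), 1, TimeUnit.MILLISECONDS);
            if (other.isEmpty()) return;            // paired with a pop(): elimination succeeded
        } catch (TimeoutException e) {
            // no partner showed up: fall through to the backing stack
        }
        stack.push(x);                              // also reached when two pushes met
    }

    T pop() throws InterruptedException {
        try {
            Optional<T> other = slot.exchange(Optional.<T>empty(), 1, TimeUnit.MILLISECONDS);
            if (other.isPresent()) return other.get(); // eliminated against a concurrent push(x)
        } catch (TimeoutException e) {
            // no partner showed up: fall through to the backing stack
        }
        return stack.poll();                        // may return null on an empty stack
    }
}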
Having a formal specification of the exchanger object is important for developing modular verification techniques for concurrent implementations. The set-sequential specification of the exchanger object

has been exploited in [20] to obtain a modular proof of the elimination-backoff stack. Namely, the exchanger objects used in the elimination scheme are independently shown to be correct (i.e. set-linearizable) and then the elimination-backoff stack is shown to be linearizable assuming the elimination scheme is made of set-linearizable exchanger objects. Thus, the correctness of the elimination-backoff stack does not rely on any particular implementation of the elimination scheme.

A relaxed queue. Despite its benefits, linearizability has drawbacks, beyond the fact that there are inherently concurrent objects without a sequential specification. There are impossibility results for a number of concurrent objects, like sets, stacks, queues and work-stealing, showing that any linearizable implementation must use synchronization mechanisms that can be implemented only through expensive instructions of current multicore architectures. These synchronization mechanisms are the read-modify-write primitives, like fetch&inc, swap and compare&swap, and the read-after-write synchronization pattern (also known as the flag principle [22, 32]), in which a process writes in a shared variable 𝐴 and then reads another shared variable 𝐵.³

Herlihy [21] showed that any linearizable nonblocking implementation of a queue or stack cannot use only the simple read/write primitives; it must use more powerful read-modify-write primitives. In the same direction, Attiya et al. [5] proved that any linearizable implementation with the minimal progress guarantees of a set, stack, queue or work stealing must use either read-after-write synchronization patterns or read-modify-write primitives.

Recently, set-linearizability has been used to define relaxations of queues and stacks that admit set-linearizable implementations that use only read/write primitives and without read-after-write synchronization patterns, hence evading the aforementioned impossibility results. Intuitively, in a queue with multiplicity [9], the usual definition of a sequential queue is relaxed in a way that distinct dequeue operation calls can return the same item, but this can happen only if they are concurrent. Then, dequeue operations returning the same item belong to the same concurrency class. In all other cases, the object behaves like a usual sequential queue. The expressiveness of set-linearizability allows one to precisely specify that the relaxation can happen only in case of concurrency.

As an example of a set-linearizable implementation, consider the simple implementation of a single-enqueuer queue with multiplicity in Figure 9. Single-enqueuer means that there is one distinguished process, called the enqueuer, that can invoke the enqueue operation. As explained below, the implementation uses only read/write primitives and is devoid of read-after-write synchronization patterns. (For clarity, we consider here the single-enqueuer case but it has been

The single-enqueuer implementation in Figure 9 uses a shared array ITEMS where items are stored in/removed from, and two shared integers, TAIL and HEAD, to store the current head and tail of the queue. ITEMS and TAIL are manipulated through simple read/write primitives, while HEAD is manipulated by dequeuers through the max_read and max_write linearizable operations: max_read returns the maximum value written so far in HEAD and max_write writes a new value in HEAD only if it is greater than the largest value that has been written so far. Aspnes, Attiya and Censor-Hillel have proposed wait-free (see Sidebar 3) linearizable implementations of max_read and max_write that use only read/write primitives and are devoid of read-after-write synchronization patterns [2], and thus the implementation in Figure 9 possesses these properties too.

Shared variables:
ITEMS[1, 2, . . .] : infinite array initialized to [⊥, ⊥, . . .]
HEAD, TAIL : integers initialized to 1

operation enqueue(𝑥𝑖) is
(09) 𝑡𝑖 ← read(TAIL)
(10) write(ITEMS[𝑡𝑖], 𝑥𝑖)
(11) write(TAIL, 𝑡𝑖 + 1)
(12) return true
end operation

operation dequeue() is
(13) ℎ𝑖 ← max_read(HEAD)
(14) 𝑟𝑖 ← read(ITEMS[ℎ𝑖])
(15) if 𝑟𝑖 ≠ ⊥ then
(16)    max_write(HEAD, ℎ𝑖 + 1)
(17)    return 𝑟𝑖
(18) end if
(19) return empty
end operation

Figure 9: A set-linearizable implementation of a single-enqueuer queue with multiplicity (code for process 𝑝𝑖).
sequential queue is relaxed in a way that distinct dequeue operation
calls can return the same item, but this can happen only if they are
In the implementation, whenever the enqueuer wants to enqueue
concurrent. Then, dequeue operations returning the same item belong
an item, it first reads the current value 𝑡 of TAIL, then stores its item
to the same concurrency class. In all other cases, the object behaves
𝑥 in ITEMS[𝑡] and finally increments TAIL by one (Lines 9 to 11).
like a usual sequential queue. The expressiveness of set-linearizability
A dequeue operation first reads the current value ℎ of HEAD using
allows to precisely specify that the relaxation can happen only in case
max_read and then reads the value 𝑥 in ITEMS[ℎ] (Line 13 and 14);
of concurrency.
if 𝑥 is distinct from ⊥, then 𝑥 is an item that has been enqueued
As an example of a set-linearizable implementation, consider the
and the operation return 𝑥, after it increments HEAD by one using
simple implementation of a single-enqueuer queue with multiplicity
max_write which logically “marks” the item in position ITEMS[ℎ]
in Figure 9. Single-enqueuer means that there is one distinguished
as taken (Lines 16 and 17), otherwise 𝑥 is equal to ⊥, which means
process, called the enqueuer, that can invoke the enqueue operation.
that the queue is empty as HEAD has “surpassed” TAIL, and hence
As explained below, the implementation uses only read/write primi-
the operations returns empty (Line 19).
tives and is devoid of read-after-write synchronization patterns. (For
Two or more concurrent enqueue operation calls can return the
clarity, we consider here the single-enqueuer case but it has been
same item. For example, the operations can read one after the other, in
shown that there are set-linearizable implementations for the multi-
some arbitrary order, the same value ℎ from HEAD in Line 13 and then
enqueuer case with similar properties [9]). It merits mention that the
read one after the other, again in some order, the value in ITEMS[ℎ]
impossibility results in [5, 21] apply also for the single-enqueuer case.
in Line 14. Namely, all these operations read the item in ITEMS[ℎ]
The implementation in Figure 9 is derived from the implementations
before the first of them “marks” the item in ITEMS[ℎ] as taken by
in [7], where work-stealing with multiplicity is studied, and shown to
updating HEAD using max_write in Line 16. Due to the semantics
be useful to derive relaxed work-stealing implementations with better
of max_read and max_write, HEAD only “moves forward”, hence a
performance than classic (i.e. non-relaxed) work-stealing solutions,
“slow” dequeue operation cannot write a small value in HEAD that
when solving problems such as parallel spanning tree.
could cause another (possibly non-concurrent) dequeue operation to
3 The read-after-write synchronization pattern has been used in many algorithms, starting
return an item that has already been dequeued. Therefore, enqueue
with the first mutual exclusion solutions.
operation calls that return the same item can be linearized at the same
linearization point, i.e. in the same concurrency class.
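For readers who prefer running code, here is a compact Java transcription of Figure 9. It is only a sketch with names of our choosing, and it comes with one caveat: the max register HEAD is emulated with AtomicInteger.getAndAccumulate, which relies on compare&swap, so this version does not preserve the read/write-only property obtained with the max registers of [2]; it merely mirrors the algorithmic structure (indices start at 0 instead of 1).

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch of the single-enqueuer queue with multiplicity of Figure 9.
// HEAD acts as a max register: maxWrite installs a value only if it is larger than
// every value written so far, and maxRead returns the largest value written so far.
final class SingleEnqueuerQueueWithMultiplicity<T> {
    private final AtomicReferenceArray<T> items;                 // the array ITEMS
    private final AtomicInteger tail = new AtomicInteger(0);     // written only by the enqueuer
    private final AtomicInteger head = new AtomicInteger(0);     // the max register HEAD

    SingleEnqueuerQueueWithMultiplicity(int capacity) {
        items = new AtomicReferenceArray<>(capacity);            // bounded stand-in for the infinite ITEMS
    }

    // Lines 9-12 of Figure 9; called by the unique enqueuer only.
    void enqueue(T x) {
        int t = tail.get();
        items.set(t, x);
        tail.set(t + 1);
    }

    // Lines 13-19 of Figure 9; called by any dequeuer.
    T dequeue() {
        int h = head.get();                               // max_read(HEAD)
        T r = items.get(h);
        if (r != null) {
            head.getAndAccumulate(h + 1, Math::max);      // max_write(HEAD, h + 1)
            return r;
        }
        return null;                                      // the queue appears empty
    }
}

Two dequeuers that both read the same h before either of them advances head will both return items.get(h); as argued above, such calls are placed in the same concurrency class of the set-linearization.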
4 INTERVAL-LINEARIZABILITY

Set-sequential specifications are more expressive than sequential ones, but there are situations where an operation appears as being executed concurrently with a sequence of several sequentially executed operations, as discussed in Section 2. An interval-sequential specification [8] defines a sequence of concurrency classes, with the possibility that an operation has its invocation in one concurrency class and its response in a later concurrency class, implying that it executed over an interval of time instead of a single point. Figure 10 takes the view of a poset determined by the operation intervals, and the corresponding order diagram, for the three types of specifications.⁴

⁴ There has been a lot of work on interval orders, both due to their mathematical interest and because of their applications to biology, algorithms, psychology, etc. The interval order for a collection of intervals on the real line is the partial order corresponding to their left-to-right precedence relation, one interval being considered less than another if it is completely to the left of the other.

Figure 10: Interval poset vs order diagram (shown for Linearizability, Set-linearizability and Interval-linearizability).

The batched counter. The rapid increase of data production nowadays naturally asks for parallelization of computations in big data processing systems, in order to achieve timely responsiveness. A common task in such systems is that of counting events in batches (e.g. the number of queries to a website) for later statistical analysis. The task is modeled by the batched counter, a sequential object that stores an integer 𝑅 initialized to 0 and provides two operations: update(𝑥), which increments 𝑅 by 𝑥, and query(), which returns the current value of 𝑅. One would like to have a concurrent implementation that allows several processes to concurrently and rapidly update and query the counter. It turns out that linearizability prevents the existence of efficient linearizable implementations of the batched counter using the simple read/write primitives: Rinberg and Keidar [33] proved that, for any wait-free linearizable read/write implementation of the batched counter for 𝑛 processes, the update operation (arguably the most frequently called operation in a big data processing system) has step complexity⁵ Ω(𝑛). This lower bound calls for well-defined relaxations of the objects that admit efficient implementations.

⁵ The step complexity of an operation is the worst-case number of primitive steps required to complete the operation.

Indeed, Rinberg and Keidar proposed a relaxation of the batched counter that has a wait-free read/write implementation with constant step complexity of its update operation. The implementation appears in Figure 11. The relaxation is formally defined through the intermediate value (IV) linearizability formalism introduced in [33], an extension of linearizability for quantitative data processing sequential objects. Loosely speaking, IV-linearizability allows query operation calls to return a value that approximates the correct response; the sequential specification of the object we seek to implement defines the correct responses. An execution is IV-linearizable if the output of every query operation lies in an interval defined by two sequential executions of the object.

Shared variables:
  𝐴[1, . . . , 𝑛] : array of integers initialized to [0, . . . , 0]

operation update(𝑥𝑖) is
(20)  𝑎𝑖 ← read(𝐴[𝑖])
(21)  write(𝐴[𝑖], 𝑎𝑖 + 𝑥𝑖)
(22)  return true
end operation

operation query() is
(23)  𝑠𝑖 ← 0
(24)  for each 𝑟𝑖 ∈ {1, . . . , 𝑛} do
(25)    𝑠𝑖 ← 𝑠𝑖 + read(𝐴[𝑟𝑖])
(26)  end for
(27)  return 𝑠𝑖
end operation

Figure 11: An interval-linearizable implementation of a batched counter (code for process 𝑝𝑖).

The relaxed batched counter can be specified by an interval-sequential automaton. In each valid interval-sequential execution, update operations happen atomically and sequentially, namely, each such operation spans a single concurrency class, and no more than one update operation appears in a concurrency class. A query operation, however, can span several concurrency classes, and its output value lies in the interval defined by the contents of the counter when the operation starts and terminates, respectively; a query operation can also appear in a single concurrency class, denoting that it is not concurrent with any other operation, and hence it must return the current value of the counter in this case.

Figure 12: An interval-linearizable execution of a batched counter object.

Figure 12 depicts an interval-linearizable execution of the relaxed batched counter. The first query operation by 𝑝3 returns the value 10 because it is implemented by reading the update(2) of 𝑝1 and then the two update operations of 𝑝2. This query() → 10 operation cannot be linearized at a single point: update(1) of 𝑝1 must happen before the query() → 8 of 𝑝2, which in turn happens before the update(3) of 𝑝2.

The implementation of the batched counter in Figure 11, from [33], is indeed interval-linearizable. In an interval-linearization of an execution of the implementation, the update operation calls are sequentially ordered according to the moment they execute Line 21 (each operation has its own concurrency class), while each query operation call is interval-linearized to the interval of that sequence that spans the update operations that are concurrent to it.
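The per-process layout of Figure 11 translates almost directly into Java. The sketch below is ours; it assumes every process (thread) is assigned a distinct slot index, and it uses AtomicLongArray only to make writes visible across threads; each slot still has a single writer, so the algorithm remains purely read/write.

import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of the interval-linearizable batched counter of Figure 11.
// Process i only ever writes slot i (lines 20-22); query() sums all slots (lines 23-27),
// so its result lies between the counter values at the start and the end of the query.
final class BatchedCounter {
    private final AtomicLongArray slots;     // the array A[1..n], indexed from 0 here

    BatchedCounter(int nProcesses) {
        slots = new AtomicLongArray(nProcesses);
    }

    // update(x) executed by the process that owns slot i.
    void update(int i, long x) {
        slots.set(i, slots.get(i) + x);      // single writer per slot: a read then a write, no CAS
    }

    // query() executed by any process.
    long query() {
        long sum = 0;
        for (int r = 0; r < slots.length(); r++) {
            sum += slots.get(r);
        }
        return sum;
    }
}

With 𝑛 threads, update costs a constant number of steps while query costs a linear number of steps, which is exactly the trade-off discussed next.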
To conclude our example, we observe that the implementation in Figure 11 has a drawback: the step complexity of its query operation is linear in the number of processes 𝑛. This drawback can be addressed as follows.

First, we note that, using the atomic read-modify-write fetch&inc primitive, it is easy to obtain a linearizable implementation of the batched counter with constant step complexity in both its update and query operations. The fetch&inc(𝑅, 𝑑) primitive atomically returns the current value of 𝑅 and adds 𝑑 to 𝑅. In a simple linearizable implementation of the batched counter, there is a shared variable 𝐴 initialized to zero; update(𝑥𝑖) simply performs fetch&inc(𝐴, 𝑥𝑖), while query() simply reads 𝐴, i.e. performs read(𝐴). Despite the good theoretical properties of this simple implementation, it does not perform well in practice, as all processes work on the shared variable 𝐴, which becomes a bottleneck, creating high contention in real multicore architectures.
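As a point of comparison, the contended single-variable design just described takes a few lines with an AtomicLong, whose getAndAdd plays the role of fetch&inc; this is our sketch, not code from [33].

import java.util.concurrent.atomic.AtomicLong;

// Linearizable batched counter built on fetch&add: constant step complexity for both
// operations, but every update contends on the single shared variable A.
final class FetchAddBatchedCounter {
    private final AtomicLong a = new AtomicLong(0);   // the shared variable A

    void update(long x) {
        a.getAndAdd(x);     // fetch&inc(A, x): one atomic read-modify-write step
    }

    long query() {
        return a.get();     // read(A)
    }
}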
An intermediate solution between the linearizable solution above and the one in Figure 11 consists in having an array 𝐴 of length 𝐾 (instead of length 𝑛 as in the implementation in Figure 11), where 𝐾 is a system-dependent constant (or maybe a sublinear function); update(𝑥𝑖) first randomly picks an entry 𝐴[𝑘𝑖] of 𝐴 and performs fetch&inc(𝐴[𝑘𝑖], 𝑥𝑖), while query() returns the sum of the 𝐾 entries of 𝐴, similarly to the query operation of Figure 11. The idea is to randomly spread the contention over the distinct components of 𝐴. This implementation has good properties: it retains wait-freedom and interval-linearizability, and it has constant step complexity in both operations.
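A sketch of this striped variant follows (our code: the stripe count K is fixed at construction time and ThreadLocalRandom picks the stripe). The same idea, with cells that grow dynamically under contention, is what java.util.concurrent.atomic.LongAdder provides in the JDK.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of the intermediate, striped batched counter: K fetch&add cells instead of one.
// update picks a random cell; query sums the K cells, matching the interval-linearizable
// behavior discussed for Figure 11.
final class StripedBatchedCounter {
    private final AtomicLongArray cells;    // the array A of length K

    StripedBatchedCounter(int k) {
        cells = new AtomicLongArray(k);
    }

    void update(long x) {
        int i = ThreadLocalRandom.current().nextInt(cells.length());
        cells.getAndAdd(i, x);              // fetch&inc(A[k_i], x)
    }

    long query() {
        long sum = 0;
        for (int i = 0; i < cells.length(); i++) {
            sum += cells.get(i);            // sum the K entries, as in the query of Figure 11
        }
        return sum;
    }
}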
Completeness results for interval-linearizability. It is known that interval-sequential specifications are complete in the sense that they are powerful enough to specify any concurrent object given by a set of concurrent executions, i.e. sequences of invocations and responses. We will say that such specifications are set-based. Arguably, set-based specifications, proposed by Lamport [27],⁶ are the most general way to define a concurrent object. Such a set specifies all the concurrent behaviors that are considered valid for any concurrent algorithm implementing the object. For example, the set-based specification of a FIFO queue contains all executions that are linearizable, while the set-based specification of a FIFO queue with multiplicity contains all executions that are set-linearizable.

⁶ Lamport originally used his happened-before relation, while from Herlihy and Wing [24] on, the string approach is predominant in the literature.

It turns out that interval-sequential specifications are able to model any set-based specification having some reasonable properties, like non-emptiness, prefix-closure and well-formedness (i.e. each process alternates between issuing invocations and responses, starting with an invocation). The result was originally proved in [8] under some assumptions, and later generalized by Goubault, Ledent and Mimram [17]. Furthermore, they prove that, in any reasonable computational shared memory model, every algorithm for a given set-based specification must satisfy all these properties. Therefore, in a formal sense, interval-sequential specifications are fully general.

5 CONCLUSION

This article has presented two known extensions of linearizability, called set-linearizability (which captures simultaneity) and interval-linearizability (which captures time-ubiquity), together with the corresponding formalisms to define more general concurrent objects: set-sequential automata and interval-sequential automata. This extended linearizability framework preserves the benefits of composability, nonblockingness and the notion of state. The paper surveyed recent work that has already been taking advantage of this approach, but there seem to be many more opportunities.

There is a very active current trend among practitioners to move away from sequential specifications, due to their performance limitations and even for simplicity, where it would be interesting to explore the use of the extended linearizability framework. Notable is the history of blockchain technology, which started with Bitcoin and its paradigm of sequentializing all monetary transactions in the system via tremendously energy-consuming consensus mining algorithms, and has moved towards recent efforts allowing concurrent ledgers to cooperate (e.g. [38]) and ledgers restricted to monetary transactions, which do not need consensus (e.g. [6, 10]). The CALM project of Hellerstein and Alvaro focuses on the class of programs that can achieve distributed consistency without the use of coordination [19]. Conflict-free replicated data types [34] provide another interesting direction for future work, e.g. [16, 26]. Their benefits of commutativity have been extended to composable libraries and languages, enabling programmers to reason about the correctness of whole programs in languages like Bloom [19]. In the context of distributed storage systems, large fragmented objects with relaxed read operations have been introduced in [13]; they admit efficient implementations. Another recent trend is relaxed specifications, e.g. [18]; there have been several studies on relaxation in the shared-memory context, focusing on skip lists, log-structured merge trees and other sequential data structures. Another line of research consists of looking for possible links between the presented linearizability hierarchy and the notions of strict linearizability [1], durable linearizability [25] and recoverable linearizability [4].

ACKNOWLEDGMENTS

This work has been partially supported by the French projects BYBLOS (ANR-20-CE25-0002-01) and PriCLeSS (ANR-10-LABX-07-81), devoted to the design of modular distributed computing building blocks, and by the UNAM-PAPIIT projects IN106520 and IN108720.
REFERENCES
[1] Aguilera M. K. and Frølund S., Strict linearizability and the power of aborting. Technical Report HPL-2003-241, Hewlett-Packard Labs (2003)
[2] Aspnes J., Attiya H., and Censor-Hillel K., Polylogarithmic concurrent data structures from monotone circuits. Journal of the ACM, 59(1):2:1–2:24 (2012)
[3] Afek Y., Attiya H., Dolev D., Gafni E., Merritt M., and Shavit N., Atomic snapshots of shared memory. Journal of the ACM, 40(4):873-890 (1993)
[4] Attiya H., Ben-Baruch O., and Hendler D., Nesting-safe recoverable linearizability: modular constructions for non-volatile memory. Proc. 37th ACM Symposium on Principles of Distributed Computing (PODC’18), ACM Press, pp. 7–16 (2018)
[5] Attiya H., Guerraoui R., Hendler D., Kuznetsov P., Michael M.M., and Vechev M.T., Laws of order: expensive synchronization in concurrent algorithms cannot be eliminated. Proc. 38th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’11), ACM Press, pp. 487-498 (2011)
[6] Auvolat A., Frey D., Raynal M., and Taïani F., Money transfer made simple. Bulletin of the European Association of Theoretical Computer Science (EATCS), 132:22-43 (2020)
[7] Castañeda A. and Piña M., Fully read/write fence-free work-stealing with multiplicity. Proc. 35th Symposium on Distributed Computing (DISC’21), LIPIcs, pp. 16:1–16:20 (2021)
[8] Castañeda A., Rajsbaum S., and Raynal M., Unifying concurrent objects and distributed tasks: interval-linearizability. Journal of the ACM, 65(6), Article 45, 42 pages (2018)
[9] Castañeda A., Rajsbaum S., and Raynal M., Relaxed queues and stacks from read/write operations. Proc. 24th Int’l Conference on Principles of Distributed Systems (OPODIS’20), LIPIcs Vol. 184, pp. 13:1–13:19 (2020)
[10] Collins D., Guerraoui R., Komatovic J., Kuznetsov P., Monti M., Pavlovic M., Pignolet Y-A., Seredinschi D-A., Tonkikh A., and Xygkis A., Online payments by merely broadcasting messages. Proc. 50th IEEE/IFIP Int’l Conference on Dependable Systems and Networks (DSN’20), IEEE Press, pp. 26-38 (2020)
[11] Ellen F., Hendler D., and Shavit N., On the inherent sequentiality of concurrent objects. SIAM Journal on Computing, 41(3):519-536 (2012)
[12] Faleiro J. M., Rajamani S., Rajan K., Ramalingam G., and Vaswani K., Generalized lattice agreement. Proc. 2012 ACM Symposium on Principles of Distributed Computing (PODC’12), ACM Press, pp. 125–134 (2012)
[13] Fernández Anta F., Georgiou Ch., Hadjistasi Th., Nicolaou N., Stavrakis E., and Trigeorgi A., Fragmented objects: boosting concurrency of shared large objects. Proc. 28th Int’l Colloquium on Structural Information and Communication Complexity (SIROCCO’21), Springer LNCS 12129, pp. 106–126 (2021)
[14] Filipović I., O’Hearn P., Rinetzky N., and Yang H., Abstraction for concurrent objects. Theoretical Computer Science, 411(51–52):4379–4398 (2010)
[15] Fischer M.J., Lynch N.A., and Paterson M.S., Impossibility of distributed consensus with one faulty process. Journal of the ACM, 32(2):374-382 (1985)
[16] Frey D., Guillou L., Raynal M., and Taïani F., Consensus-free ledgers: when operations of distinct processes are commutative. Proc. 16th Int’l Conference on Parallel Computing Technologies, Springer LNCS, 11 pages (2021)
[17] Goubault E., Ledent J., and Mimram S., Concurrent specifications beyond linearizability. Proc. 22nd Int’l Conference on Principles of Distributed Systems (OPODIS’18), LIPIcs Vol. 125, pp. 28:1-28:16 (2018)
[18] Henzinger T. A., Kirsch C. M., Payer H., Sezgin A., and Sokolova A., Quantitative relaxation of concurrent data structures. Proc. 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’13), pp. 317–328 (2013)
[19] Hellerstein J.M. and Alvaro P., Keeping CALM: when distributed consistency is easy. Communications of the ACM, 63(9):72–81 (2020)
[20] Hemed N., Rinetzky N., and Vafeiadis V., Modular verification of concurrency-aware linearizability. Proc. 29th Int’l Symposium on Distributed Computing (DISC’15), Springer LNCS 9363, pp. 371-387 (2015)
[21] Herlihy M.P., Wait-free synchronization. ACM Transactions on Programming Languages and Systems, 13(1):124-149 (1991)
[22] Herlihy M.P. and Shavit N., The art of multiprocessor programming. Morgan Kaufmann, 508 pages, ISBN 978-0-12-370591-4 (2008)
[23] Hendler D., Shavit N., and Yerushalmi L., A scalable lock-free stack algorithm. Journal of Parallel and Distributed Computing, 70(1):1–12 (2010)
[24] Herlihy M.P. and Wing J.M., Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463-492 (1990)
[25] Izraelevitz J., Mendes H., and Scott M. L., Linearizability of persistent memory objects under a full-system-crash failure model. Proc. 30th Int’l Symposium on Distributed Computing (DISC’16), Springer LNCS 9888, pp. 313-327 (2016)
[26] Kuznetsov P., Rieutord Th., and Tucci Piergiovanni S., Reconfigurable lattice agreement and applications. Proc. 23rd Int’l Conference on Principles of Distributed Systems (OPODIS’19), LIPIcs, pp. 31:1-31:17 (2020)
[27] Lamport L., On inter-process communications, part I: basic formalism, part II: algorithms. Distributed Computing, 1(2):77-101 (1986)
[28] Loui M. and Abu-Amara H., Memory requirements for agreement among unreliable asynchronous processes. Advances in Computing Research, 4:163–183 (1987)
[29] Montanari U., True concurrency: theory and practice. In Mathematics of Program Construction (MPC’92), Springer Verlag LNCS 669, pp. 14-17 (1993)
[30] Neiger G., Set linearizability (Brief announcement). Proc. 13th Annual ACM Symposium on Principles of Distributed Computing (PODC’94), ACM Press, page 396 (1994)
[31] Rajsbaum S. and Raynal M., Mastering concurrent computing through sequential thinking: a half-century evolution. Communications of the ACM, 63(1):78-87 (2020)
[32] Raynal M., Concurrent programming: algorithms, principles and foundations. Springer, 515 pages, ISBN 978-3-642-32026-2 (2013)
[33] Rinberg A. and Keidar I., Intermediate value linearizability: a quantitative correctness criterion. Proc. 34th Int’l Symposium on Distributed Computing (DISC’20), LIPIcs Vol. 179, pp. 2:1–2:17 (2020)
[34] Shapiro M., Preguiça N., Baquero C., and Zawirski M., Conflict-free replicated data types. Proc. Symposium on Self-Stabilization, Safety, and Security of Distributed Systems (SSS’11), Springer LNCS 6976, pp. 386–400 (2011)
[35] Scherer III W. N., Lea D., and Scott M. L., Scalable synchronous queues. Communications of the ACM, 52(5):100–111 (2009)
[36] Shavit N., Data structures in the multicore age. Communications of the ACM, 54(3):76–84 (2011)
[37] Shavit N. and Touitou D., Elimination trees and the construction of pools and stacks. Theory of Computing Systems, 30(6):645–670 (1997)
[38] Sompolinsky Y. and Zohar A., PHANTOM: a scalable blockDAG protocol. Cryptology ePrint Archive, Report 104 (2018)
[39] Viotti P. and Vukolić M., Consistency in non-transactional distributed storage systems. ACM Computing Surveys, 49(1), Article 19, 34 pages (2016)
[40] Zheng X., Hu G., and Garg V., Lattice agreement in message passing systems. Proc. 32nd Int’l Symposium on Distributed Computing (DISC’18), LIPIcs Vol. 121, pp. 23:1–23:17 (2018)