
Information and Computation 254 (2017) 105–139


The vectorial λ-calculus


Pablo Arrighi a, Alejandro Díaz-Caro b,∗, Benoît Valiron c
a Aix-Marseille Université, CNRS, LIF, Marseille and IXXI, Lyon, France
b Universidad Nacional de Quilmes & CONICET, B1876BXD Bernal, Buenos Aires, Argentina
c LRI, CentraleSupélec, Univ. Paris-Sud, CNRS, Université Paris-Saclay, 91405 Orsay Cedex, France

Article history:
Received 5 August 2013
Received in revised form 9 April 2016
Available online 23 April 2017

Abstract. We describe a type system for the linear-algebraic λ-calculus. The type system accounts for the linear-algebraic aspects of this extension of λ-calculus: it is able to statically describe the linear combinations of terms that will be obtained when reducing the programs. This gives rise to an original type theory where types, in the same way as terms, can be superposed into linear combinations. We prove that the resulting typed λ-calculus is strongly normalising and features weak subject reduction. Finally, we show how to naturally encode matrices and vectors in this typed calculus.
© 2017 Elsevier Inc. All rights reserved.

1. Introduction

1.1. (Linear-)algebraic λ-calculi

A number of recent works seek to endow the λ-calculus with a vector space structure. This agenda has emerged simultaneously in two different contexts.

The field of Linear Logic considers a logic of resources where the propositions themselves stand for those resources and hence cannot be discarded nor copied. When seeking to find models of this logic, one obtains a particular family of vector spaces and differentiable functions over these. It is by trying to capture these mathematical structures back into a programming language that Ehrhard and Regnier have defined the differential λ-calculus [28], which has an intriguing differential operator as a built-in primitive and an algebraic module of the λ-calculus terms over natural numbers. Vaux [43] has focused his attention on a differential λ-calculus without differential operator, extending the algebraic module to positive real numbers. He obtained a confluence result in this case, which stands even in the untyped setting. More recent works on this algebraic λ-calculus tend to consider arbitrary scalars [1,27,40].
The field of Quantum Computation postulates that, as computers are physical systems, they may behave according to quantum theory. It proves that, if this is the case, novel, more efficient algorithms are possible [30,39] which have no classical counterpart. Whilst partly unexplained, it is nevertheless clear that the algorithmic speed-up arises by tapping into the parallelism granted to us for free by the superposition principle, which states that if t and u are possible states of a system, then so is the formal linear combination α·t + β·u (with α and β some arbitrary complex numbers, up to a normalizing factor). The idea of a module of λ-terms over an arbitrary scalar field arises quite naturally in this


Partially supported by the STIC-AmSud project FoQCoSS and PICT-PRH 2015-1208.
* Corresponding author.
E-mail addresses: pablo.arrighi@univ-amu.fr (P. Arrighi), alejandro.diaz-caro@unq.edu.ar (A. Díaz-Caro), benoit.valiron@monoidal.net (B. Valiron).

http://dx.doi.org/10.1016/j.ic.2017.04.001
0890-5401/© 2017 Elsevier Inc. All rights reserved.

context. This was the motivation behind the linear-algebraic λ-calculus, or Lineal for short, by Dowek and one of the authors [7], who obtained a confluence result which holds for arbitrary scalars and again covers the untyped setting.

These two languages are rather similar: they both merge higher-order computation, be it terminating or not, in its simplest and most general form (namely the untyped λ-calculus) together with linear algebra, also in its simplest and most general form (the axioms of vector spaces). In fact they can simulate each other [8]. Our starting point is the second one, Lineal, because its confluence proof allows arbitrary scalars and because one has to make a choice. Whether the models developed for the first language, and the type systems developed for the second language, carry through to one another via their reciprocal simulations is a topic of future investigation.

1.2. Other motivations to study (linear-)algebraic λ-calculi

The two languages are also reminiscent of other works in the literature:

Algebraic and symbolic computation. The functional style of programming is based on the λ-calculus together with a number of extensions, so as to make everyday programming more accessible. Hence, since the birth of functional programming, there have been several theoretical studies on extensions of the λ-calculus in order to account for basic algebra (see for instance Dougherty's algebraic extension [26] for normalising terms of the λ-calculus) and other basic programming constructs such as pattern-matching [3,16], together with sometimes non-trivial associated type theories [37]. Whilst this was not the original motivation behind (linear-)algebraic λ-calculi, they could still be viewed as an extension of the λ-calculus in order to handle operations over vector spaces and make programming with them more accessible. The main difference in approach is that the λ-calculus is not seen here as a control structure which sits on top of the vector space data structure, controlling which operations to apply and when. Rather, the λ-calculus terms themselves can be summed and weighted, hence they actually are vectors, upon which they can also act.
Parallel and probabilistic computation. The above intertwinings of concepts are essential when seeking to represent parallel or probabilistic computation, as it is the computation itself which must be endowed with a vector space structure. The ability to superpose λ-calculus terms in that sense takes us back to Boudol's parallel λ-calculus [12] and to de Liguoro and Piperno's work on non-deterministic extensions of λ-calculus [18], as well as to more recent works such as [14,24,36]. It may also be viewed as part of a series of works on probabilistic extensions of calculi, e.g. [13,31] and, for λ-calculus more specifically, [19,21,35].

Hence (linear-)algebraic λ-calculi can be seen as a platform for various applications, ranging from algebraic and probabilistic computation to quantum and resource-aware computation.

1.3. The language

The language we consider in this paper will be called the vectorial λ-calculus, denoted by λvec. It is derived from Lineal [7]. This language admits the regular constructs of λ-calculus: variables x, y, …, λ-abstractions λx.s and application (s) t. But it also admits linear combinations of terms: 0, s + t and α·s are terms, where the scalar α ranges over a ring. As in [7], it behaves in a call-by-value oriented manner, in the sense that (λx.r) (s + t) first reduces to (λx.r) s + (λx.r) t until basis terms (i.e. values) are reached, at which point beta-reduction applies.
The set of normal forms of the terms can then be interpreted as a module, and the term (λx.r) s can be seen as the application of the linear operator λx.r to the vector s. The goal of this paper is to give a formal account of linear operators and vectors at the level of the type system.

1.4. Our contributions: the types

Our goal is to characterize the vectoriality of the system of terms, as summarized by the slogan:

If s : T and t : R then α·s + β·t : α·T + β·R.

In the end we achieve a type system such that:

The typed language features a slightly weakened subject reduction property (Theorem 4.1).
The typed language features strong normalization (cf. Theorem 5.7).
In general, if t has type Σ_{i} αi·Ui, then it must reduce to a t′ of the form Σ_{i} Σ_{j} βij·bij, where: the bij's are basis terms of unit type Ui, and Σ_{j} βij = αi (cf. Theorem 6.1).
In particular, finite vectors and matrices and tensorial products can be encoded within λvec. In this case, the type of the encoded expressions coincides with the result of the expression (cf. Theorem 6.2).

Beyond these formal results, this work constitutes a first attempt to describe a natural type system with type constructs α· and + and to study their behaviour.

1.5. Directly related works

This paper is part of a research path [2,4,7,15,25,41,42] to design a typed language where terms can be linear combinations of terms (they can be interpreted as probability distributions or quantum superpositions of data and programs) and where the types capture some of this additional structure (they provide the propositions for a probabilistic or quantum logic via Curry–Howard).
Along this path, a first step was accomplished in [4] with scalars in the type system. If α is a scalar and Γ ⊢ t : T is a valid sequent, then Γ ⊢ α·t : α·T is a valid sequent. When the scalars are taken to be positive real numbers, the developed language actually provides a static analysis tool for probabilistic computation. However, it fails to address the following issue: without sums but with negative numbers, the term representing true − false, namely λx.λy.x − λx.λy.y, is typed with 0·(X → (X → X)), a type which fails to exhibit the fact that we have a superposition of terms.
A second step was accomplished in [25] with sums in the type system. In this case, if Γ ⊢ s : S and Γ ⊢ t : T are two valid sequents, then Γ ⊢ s + t : S + T is a valid sequent. However, the language considered is only the additive fragment of Lineal: it leaves scalars out of the picture. For instance, λx.λy.x − λx.λy.y does not have a type, due to its minus sign. Each of these two contributions required renewed, careful and lengthy proofs about their type systems, introducing new techniques.
The type system we propose in this paper builds upon these two approaches: it includes both scalars and sums of types, thereby reflecting the vectorial structure of the terms at the level of types. Interestingly, combining the two separate features of [4,25] raises subtle novel issues, which we identify and discuss (cf. Section 3). Equipped with those two vectorial type constructs, the type system is indeed able to capture some fine-grained information about the vectorial structure of the terms. Intuitively, this means keeping track of both the direction and the amplitude of the terms.
A preliminary version of this paper has appeared in [5].

1.6. Plan of the paper

In Section 2, we present the language. We discuss the differences with the original language Lineal [7]. In Section 3, we explain the problems arising from the possibility of having linear combinations of types, and elaborate a type system that addresses those problems. Section 4 is devoted to subject reduction. We first say why the standard formulation of subject reduction does not hold. Second, we state a slightly weakened notion of the subject reduction theorem, and we prove this result. In Section 5, we prove strong normalisation. Finally, we close the paper in Section 6 with theorems about the information brought by the type judgements, both in the general and the finitary cases (matrices and vectors).

2. The terms

We consider the untyped language λvec described in Fig. 1. It is based on Lineal [7]: terms come in two flavours, basis terms, which are the only ones that will substitute a variable in a β-reduction step, and general terms. We use Krivine's notation [34] for function application: the term (s) t passes the argument t to the function s.
In addition to β-reduction, there are fifteen rules stemming from the oriented axioms of vector spaces [7], specifying the behaviour of sums and products. We divide the rules in groups: Elementary (E), Factorisation (F), Application (A) and the Beta reduction (B). Essentially, the rules E and F, presented in [6], consist in capturing the equations of vector spaces in an oriented rewrite system. For example, 0·s reduces to 0, as 0·s = 0 is valid in vector spaces. It should also be noted that this set of algebraic rules is confluent, and does not introduce loops. In particular, the two rules stating α·(t + r) → α·t + α·r and α·t + β·t → (α + β)·t are not inverse one of the other when r = t. Indeed,
α·(t + t) → α·t + α·t → (α + α)·t,
but not the other way around.
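The oriented use of these axioms can be sketched concretely. The following Python fragment is our own illustration (the paper defines no implementation): it represents a linear combination as a list of (scalar, term) pairs and normalizes it in the direction of the factorisation and zero rules.

```python
def normalize(comb):
    """Apply the oriented rules of Groups E and F to a linear combination:
    collect scalars over syntactically equal terms (a.t + b.t -> (a+b).t)
    and erase zero-weighted summands (0.t -> 0).
    `comb` is a list of (scalar, term) pairs; terms are compared for equality.
    """
    weights = {}
    for scalar, term in comb:
        weights[term] = weights.get(term, 0) + scalar
    # 0.t -> 0: summands with scalar 0 vanish from the combination
    return {t: a for t, a in weights.items() if a != 0}

# a.(t + t) -> a.t + a.t -> (a + a).t, as in the example above (with a = 2):
print(normalize([(2, "t"), (2, "t")]))   # {'t': 4}
# t - t normalizes to 0, the empty combination:
print(normalize([(1, "t"), (-1, "t")]))  # {}
```

Only the factorisation direction is computed: recovering α·(t + t) from (α + α)·t is not attempted, mirroring the orientation of the rewrite system.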
The group of A rules formalizes the fact that a general term t is thought of as a linear combination of terms α·r + β·r′, and the fact that the application is distributive on the left and on the right. When we apply s to such a superposition, (s) t reduces to α·(s) r + β·(s) r′. The term 0 is the empty linear combination of terms, explaining the last two rules of Group A.
Terms are considered modulo associativity and commutativity of the operator +, making the reduction into an AC-rewrite system [32]. Scalars (notation α, β, γ, …) form a ring (𝕊, +, ×), where the scalar 0 is the unit of the addition and 1 the unit of the multiplication. We use the shortcut notation s − t in place of s + (−1)·t. Note that although the typical ring we consider in the examples is the ring of complex numbers, the development works for any ring: the ring of integers ℤ, the finite ring ℤ/2ℤ, etc.
The set of free variables of a term is defined as usual: the only operator binding variables is the λ-abstraction. The operation of substitution on terms (notation t[b/x]) is defined in the usual way for the regular λ-term constructs, by taking care of variable renaming to avoid capture. For a linear combination, the substitution is defined as follows: (α·t + β·r)[b/x] = α·t[b/x] + β·r[b/x].
Note that we need to choose a reduction strategy. For example, the term (λx.(x) x) (y + z) cannot reduce to both (λx.(x) x) y + (λx.(x) x) z and (y + z) (y + z). Indeed, the former normalizes to (y) y + (z) z whereas the latter normalizes to (y) z + (y) y + (z) y + (z) z, which would break confluence. As in [4,7,25], we consider a call-by-value reduction strategy: the argument of the application is required to be a basis term, cf. Group B.

Terms: r, s, t, u ::= b | (t) r | 0 | α·t | t + r
Basis terms: b ::= x | λx.t

Group E:  0·t → 0;  1·t → t;  α·0 → 0;  α·(β·t) → (α × β)·t;  α·(t + r) → α·t + α·r.
Group F:  α·t + β·t → (α + β)·t;  α·t + t → (α + 1)·t;  t + t → (1 + 1)·t;  t + 0 → t.
Group A:  (t + r) u → (t) u + (r) u;  (t) (r + u) → (t) r + (t) u;  (α·t) r → α·(t) r;  (t) (α·r) → α·(t) r;  (0) t → 0;  (t) 0 → 0.
Group B:  (λx.t) b → t[b/x].
Context rules:  if t → r, then  t + u → r + u,  α·t → α·r,  (u) t → (u) r,  (t) u → (r) u  and  λx.t → λx.r.

Fig. 1. Syntax, reduction rules and context rules of λvec.

2.1. Relation to Lineal

Although strongly inspired from Lineal, the language vec is closer to [4,8,25]. Indeed, Lineal considers some restrictions
on the reduction rules, for example t + t ( + ) t is only allowed when t is a closed normal term. These restrictions
are enforced to ensure conuence in the untyped setting. Consider the following example. Let Yb = (x.(b + (x) x)) x.(b +
(x) x). Then Yb reduces to b + Yb . So the term Yb Yb reduces to 0, but also reduces to b + Yb Yb and hence to b,
breaking conuence. The above restriction forbids the rst reduction, bringing back conuence. In our setting we do not
need it because Yb is not well-typed. If one considers a typed language enforcing strong normalisation, one can waive many
of the restrictions and consider a more canonical set of rewrite rules [4,8,25]. Working with a type system enforcing strong
normalisation (as shown in Section 5), we follow this approach.

2.2. Booleans in the vectorial λ-calculus

We claimed in the introduction that the design of Lineal was motivated by quantum computing; in this section we develop this analogy.
Both in λvec and in quantum computation one can interpret the notion of booleans. In the former we can consider the usual booleans λx.λy.x and λx.λy.y, whereas in the latter we consider the regular quantum bits true = |0⟩ and false = |1⟩.
In λvec, a representation of if r then s else t needs to take into account the special relation between sums and applications. We cannot directly encode this test as the usual ((r) s) t. Indeed, if r, s and t were respectively the terms true, s1 + s2 and t1 + t2, the term ((r) s) t would reduce to ((true) s1) t1 + ((true) s1) t2 + ((true) s2) t1 + ((true) s2) t2, then to 2·s1 + 2·s2 instead of s1 + s2. We need to freeze the computations in each branch of the test so that the sum does not distribute over the application. For that purpose we use the well-known notion of thunks [7]: we encode the test as {((r) [s]) [t]}, where [−] is the term λf.(−) with f a fresh, unused term variable and where {−} is the term (−) λx.x. The former freezes the linearity while the latter releases it. Then the term if true then (s1 + s2) else (t1 + t2) reduces to the term s1 + s2, as one could expect. Note that this test is linear, in the sense that the term if (α·true + β·false) then s else t reduces to α·s + β·t.
This is similar to the quantum test that can be found e.g. in [2,42]. Quantum computation deals with complex, linear combinations of terms, and a typical computation is run by applying linear unitary operations on the terms, called gates. For example, the Hadamard gate H acts on the space of booleans spanned by true and false. It sends true to (1/√2)·(true + false) and false to (1/√2)·(true − false). If x is a quantum bit, the value (H) x can be represented as the quantum test

(H) x := if x then (1/√2)·(true + false) else (1/√2)·(true − false).

As developed in [7], one can simulate this operation in λvec using the test construction we just described:

(H) x := {((x) [(1/√2)·true + (1/√2)·false]) [(1/√2)·true − (1/√2)·false]}.

Note that the thunks are necessary: without thunks the term

((x) ((1/√2)·true + (1/√2)·false)) ((1/√2)·true − (1/√2)·false)

would reduce to the term

(1/2)·(((x) true) true − ((x) true) false + ((x) false) true − ((x) false) false),

which is fundamentally different from the term H we are trying to emulate.
With this procedure we can encode any matrix. If the space is of some general dimension n, instead of the basis elements true and false we can choose, for i = 1 to n, the terms λx1.⋯.λxn.xi to encode the basis of the space. We can also take tensor products of qubits. We come back to these encodings in Section 6.
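Numerically, the linearity of the test means that (H) x acts as the Hadamard matrix on the pair of amplitudes carried by true and false. The following small self-contained check is ours (names and representation are not from the paper):

```python
import math

s = 1 / math.sqrt(2)
# Amplitude vectors in the basis (true, false): the two branches of the
# test are the images of the basis elements, i.e. the columns of H.
plus  = (s, s)    # 1/sqrt(2) (true + false)
minus = (s, -s)   # 1/sqrt(2) (true - false)

def hadamard(a, b):
    """By linearity of the test: a.true + b.false maps to a.|+> + b.|->."""
    return (a * plus[0] + b * minus[0], a * plus[1] + b * minus[1])

# (H) true = |+>, and H is its own inverse: (H) (H) true = true.
print(hadamard(1, 0))               # (0.707..., 0.707...)
print(hadamard(*hadamard(1, 0)))    # approximately (1.0, 0.0)
```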

3. The type system

This section presents the core definition of the paper: the vectorial type system.

3.1. Intuitions

Before diving into the technicalities of the definition, we discuss the rationale behind the construction of the type system.

3.1.1. Superposition of types


We want to incorporate the notion of scalars in the type system. If A is a valid type, the construction α·A is also a valid type, and if the terms s and t are of type A, the term α·s + β·t is of type (α + β)·A. This was achieved in [4], and it allows us to distinguish between the functions λx.(1·x) and λx.(2·x): the former is of type A → A whereas the latter is of type A → (2·A).
The terms true and false can be typed in the usual way with B = X → (X → X), for a fixed type X. So let us consider the term (1/√2)·(true − false). Using the above addition to the type system, this term should be of type 0·B, a type which fails to exhibit the fact that we have a superposition of terms. For instance, applying the Hadamard gate to this term produces the term false of type B: the norm would then jump from 0 to 1. This time, the problem comes from the fact that the type system does not keep track of the direction of a term.
To address this problem we must allow sums of types. For instance, provided that T = X → (Y → X) and F = X → (Y → Y), we can type the term (1/√2)·(true − false) with (1/√2)·(T − F), which has L²-norm 1, just like the type of false has norm one.
At this stage the type system is able to type the term H = λx.{((x) [(1/√2)·true + (1/√2)·false]) [(1/√2)·true − (1/√2)·false]}. Indeed, remember that the thunk construction [−] is simply λf.(−) where f is a fresh variable and that {−} is (−) λx.x. So whenever t has type A, [t] has type I → A with I an identity type of the form Z → Z, and {t} has type A whenever t has type I → A. The term H can then be typed with ((I → (1/√2)·(T + F)) → (I → (1/√2)·(T − F)) → I → G) → G, where G is any fixed type.
Let us now try to type the term (H) true. This is possible by taking G to be (1/√2)·(T + F). But then, if we want to type the term (H) false, G needs to be equal to (1/√2)·(T − F). It follows that we cannot type the term (H) ((1/√2)·true + (1/√2)·false), since there is no possibility to conciliate the two constraints on G.
To address this problem, we need a forall construction in the type system, making it à la System F. The term H can now be typed with ∀G.((I → (1/√2)·(T + F)) → (I → (1/√2)·(T − F)) → I → G) → G, and the types T and F are updated to be respectively ∀XY.X → (Y → X) and ∀XY.X → (Y → Y). The terms (H) true and (H) false can both be well-typed with respective types (1/√2)·(T + F) and (1/√2)·(T − F), as expected.

3.1.2. Type variables, units and general types


Because of the call-by-value strategy, variables must range over types that are not linear combinations of other types, i.e. unit types. To illustrate this necessity, consider the following example. Suppose we allow variables to have scaled types, such as α·U. Then the term λx.x + y could have type (α·U → α·U) + V (with y of type V). Let b be of type U; then (λx.x + y) (α·b) has type α·U + V, but then

(λx.x + y) (α·b) → α·((λx.x + y) b) → α·(b + y) → α·b + α·y,

which is problematic since the type α·U + V does not reflect such a superposition. Hence, the left side of an arrow will be required to be a unit type. This is achieved by the grammar defined in Fig. 2.
Type variables, however, do not always have to be unit types. Indeed, a forall of a general type was needed in the previous section in order to type the term H. But we need to distinguish a general type variable from a unit type variable, in order to make sure that only unit types appear at the left of arrows. Therefore, we define two sorts of type variables: the variables X, to be replaced with unit types, and 𝕏, to be replaced with any type (we write 𝒳 when we mean either one). The type X is a unit type whereas the type 𝕏 is not.
In particular, the type T is now ∀X𝕐.X → 𝕐 → X, the type F is ∀X𝕐.X → 𝕐 → 𝕐 and the type of H is

∀𝕏.((I → (1/√2)·(T + F)) → (I → (1/√2)·(T − F)) → I → 𝕏) → 𝕏.

Notice how the left sides of all arrows remain unit types.

Types:      T, R, S ::= U | α·T | T + R | 𝕏
Unit types: U, V, W ::= X | U → T | ∀X.U | ∀𝕏.U

Type equivalences:
1·T ≡ T                    α·T + β·T ≡ (α + β)·T
α·(β·T) ≡ (α × β)·T        T + R ≡ R + T
α·T + α·R ≡ α·(T + R)      T + (R + S) ≡ (T + R) + S

Typing rules:
(ax)  Γ, x : U ⊢ x : U
(0I)  Γ ⊢ t : T  ⟹  Γ ⊢ 0 : 0·T
(→I)  Γ, x : U ⊢ t : T  ⟹  Γ ⊢ λx.t : U → T
(→E)  Γ ⊢ t : Σ_{i=1}^n αi·∀𝒳⃗.(U → Ti)  and  Γ ⊢ r : Σ_{j=1}^m βj·U[A⃗j/𝒳⃗]  ⟹  Γ ⊢ (t) r : Σ_{i=1}^n Σ_{j=1}^m αi·βj·Ti[A⃗j/𝒳⃗]
(∀I)  Γ ⊢ t : Σ_{i=1}^n αi·Ui  and  𝒳 ∉ FV(Γ)  ⟹  Γ ⊢ t : Σ_{i=1}^n αi·∀𝒳.Ui
(∀E)  Γ ⊢ t : Σ_{i=1}^n αi·∀𝒳.Ui  ⟹  Γ ⊢ t : Σ_{i=1}^n αi·Ui[A/𝒳]
(αI)  Γ ⊢ t : T  ⟹  Γ ⊢ α·t : α·T
(+I)  Γ ⊢ t : T  and  Γ ⊢ r : R  ⟹  Γ ⊢ t + r : T + R
(≡)   Γ ⊢ t : T  and  T ≡ R  ⟹  Γ ⊢ t : R

Fig. 2. Types and typing rules of λvec. We use 𝒳 when we do not want to specify whether it is X or 𝕏, that is, a unit variable or a general variable respectively. In T[A/𝒳], if 𝒳 = X, then A is a unit type, and if 𝒳 = 𝕏, then A can be any type. We may also write (∀XI) and (∀𝕏I) (resp. (∀XE) and (∀𝕏E)) when we need to specify which kind of variable is being used.

3.1.3. The term 0


The term 0 will naturally have the type 0·T, for any inhabited type T (enforcing the intuition that the term 0 is essentially a normal form of programs of the form t − t).
We could also consider adding the equivalence R + 0·T ≡ R as in [4]. However, consider the following example. Let λx.x be of type U → U and let t be of type T. The term λx.x + t − t is of type (U → U) + 0·T, that is, (U → U). Now choose b of type U: we are allowed to say that (λx.x + t − t) b is of type U. This term reduces to b + (t) b − (t) b. But if the type system is reasonable enough, we should at least be able to type (t) b. However, since there is no constraint on the type T, this is difficult to enforce.
The problem comes from the fact that along the typing of t − t, the type of t is lost in the equivalence (U → U) + 0·T ≡ U → U. The only solution is to not discard 0·T, that is, to not equate R + 0·T and R.

3.2. Formalisation

We now give a formal account of the type system: we first describe the language of types, then present the typing rules.

3.2.1. Definition of types


Types are defined in Fig. 2 (top). They come in two flavours: unit types and general types, that is, linear combinations of types.
Unit types include all types of System F [29, Ch. 11] and intuitively they are used to type basis terms. The arrow type admits only a unit type in its domain. This is due to the fact that the argument of a λ-abstraction can only be substituted by a basis term, as discussed in Section 3.1.2. As discussed before, the type system features two sorts of variables: unit variables X and general variables 𝕏. The former can only be substituted by a unit type whereas the latter can be substituted by any type. We use the notation 𝒳 when the type variable is unrestricted. The substitution of X by U (resp. 𝕏 by S) in T is defined as usual and is written T[U/X] (resp. T[S/𝕏]). We use the notation T[A/𝒳] to say: if 𝒳 is a unit variable, then A is a unit type, and otherwise A is a general type. In particular, for a linear combination, the substitution is defined as follows: (α·T + β·R)[A/𝒳] = α·T[A/𝒳] + β·R[A/𝒳]. We also use the vectorial notation T[A⃗/𝒳⃗] for T[A1/𝒳1]⋯[An/𝒳n] if A⃗ = A1, …, An and 𝒳⃗ = 𝒳1, …, 𝒳n, and also ∀𝒳⃗ for ∀𝒳1.….∀𝒳n.
The equivalence relation ≡ on types is defined as a congruence. Notice that this equivalence makes the types into a weak module over the scalars: they almost form a module, save for the fact that there is no neutral element for the addition. The type 0·T is not the neutral element of the addition.
We may use the summation (Σ) notation without ambiguity, due to the associativity and commutativity equivalences of +.

3.2.2. Typing rules


The typing rules are given also in Fig. 2 (bottom). Contexts are denoted by Γ, Δ, etc. and are defined as sets {x : U, …}, where x is a term variable appearing only once in the set, and U is a unit type. The axiom (ax) and the arrow introduction rule (→I) are the usual ones. The rule (0I) to type the term 0 takes into account the discussion in Section 3.1.3. This rule also ensures that the type of 0 is inhabited, discarding problematic types like 0·∀X.X. Any sum of typed terms can be typed using Rule (+I). Similarly, any scaled typed term can be typed with (αI). Rule (≡) ensures that equivalent types can be used to type the same terms. Finally, the particular form of the arrow-elimination rule (→E) is due to the rewrite rules in group A that distribute sums and scalars over application. The need and use of this complicated arrow elimination can be illustrated by the following three examples.

Example 3.1. Rule (→E) is easier to read for trivial linear combinations. It states that provided that Γ ⊢ s : ∀X.U → S and Γ ⊢ t : V, if there exists some type W such that V = U[W/X], then since the sequent Γ ⊢ s : V → S[W/X] is valid, we also have Γ ⊢ (s) t : S[W/X]. Hence, the arrow elimination here performs an arrow and a forall elimination at the same time.

Example 3.2. Consider the terms b1 and b2, of respective types U1 and U2. The term b1 + b2 is of type U1 + U2. We would reasonably expect the term (λx.x) (b1 + b2) to also be of type U1 + U2. This is the case thanks to Rule (→E). Indeed, type the term λx.x with the type ∀X.X → X, and we can now apply the rule. Notice that we could not type such a term unless we eliminated the forall together with the arrow.

Example 3.3. A slightly more involved example is the projection of a pair of elements. It is possible to encode in System F the notion of pairs and projections: ⟨b, c⟩ = λx.((x) b) c, ⟨b′, c′⟩ = λx.((x) b′) c′, π1 = λx.(x) (λy.λz.y) and π2 = λx.(x) (λy.λz.z). Provided that b, b′, c and c′ have respective types U, U′, V and V′, the type of ⟨b, c⟩ is ∀𝕏.(U → V → 𝕏) → 𝕏 and the type of ⟨b′, c′⟩ is ∀𝕏.(U′ → V′ → 𝕏) → 𝕏. The terms π1 and π2 can be typed respectively with ∀XYZ.((X → Y → X) → Z) → Z and ∀XYZ.((X → Y → Y) → Z) → Z. The term (π1 + π2) (⟨b, c⟩ + ⟨b′, c′⟩) is then typable of type U + U′ + V + V′, thanks to Rule (→E). Note that this is consistent with the rewrite system, since it reduces to b + c + b′ + c′.

3.3. Example: typing Hadamard

In this section, we formally show how to retrieve the type that was discussed in Section 3.1.2 for the term H encoding the Hadamard gate.
Let true = λx.λy.x and false = λx.λy.y. It is easy to check that

⊢ true : ∀X𝕐.X → 𝕐 → X,
⊢ false : ∀X𝕐.X → 𝕐 → 𝕐.

We also define the following superpositions:

|+⟩ = (1/√2)·(true + false) and |−⟩ = (1/√2)·(true − false).

In the same way, we define

S₊ = (1/√2)·((∀X𝕐.X → 𝕐 → X) + (∀X𝕐.X → 𝕐 → 𝕐)),
S₋ = (1/√2)·((∀X𝕐.X → 𝕐 → X) − (∀X𝕐.X → 𝕐 → 𝕐)).

Finally, we recall that [t] = λx.t, where x ∉ FV(t), and {t} = (t) λx.x, so that {[t]} → t. Then it is easy to check that ⊢ [|+⟩] : I → S₊ and ⊢ [|−⟩] : I → S₋. In order to simplify the notation, let F = (I → S₊) → (I → S₋) → (I → 𝕏). Then:

x : F ⊢ x : F                                                        (ax)
x : F ⊢ (x) [|+⟩] : (I → S₋) → (I → 𝕏)                               (→E, with ⊢ [|+⟩] : I → S₊)
x : F ⊢ ((x) [|+⟩]) [|−⟩] : I → 𝕏                                    (→E, with ⊢ [|−⟩] : I → S₋)
x : F ⊢ {((x) [|+⟩]) [|−⟩]} : 𝕏                                      (→E, with ⊢ λx.x : I)
⊢ λx.{((x) [|+⟩]) [|−⟩]} : F → 𝕏                                     (→I)
⊢ λx.{((x) [|+⟩]) [|−⟩]} : ∀𝕏.((I → S₊) → (I → S₋) → (I → 𝕏)) → 𝕏    (∀I)

Now we can apply Hadamard to a qubit and get the right type. Let H be the term λx.{((x) [|+⟩]) [|−⟩]}. Then:

⊢ H : ∀𝕏.((I → S₊) → (I → S₋) → (I → 𝕏)) → 𝕏
⊢ H : ((I → S₊) → (I → S₋) → (I → S₊)) → S₊                          (∀E, instantiating 𝕏 with S₊)
⊢ true : ∀X𝕐.X → 𝕐 → X
⊢ true : ∀𝕐.(I → S₊) → 𝕐 → (I → S₊)                                  (∀E, instantiating X with I → S₊)
⊢ true : (I → S₊) → (I → S₋) → (I → S₊)                              (∀E, instantiating 𝕐 with I → S₋)
⊢ (H) true : S₊                                                      (→E)

Yet a more interesting example is the following. Let

S₊ᴵ = (1/√2)·(((I → S₊) → (I → S₋) → (I → S₊)) + ((I → S₊) → (I → S₋) → (I → S₋))),

that is, S₊ where the foralls have been instantiated. It is easy to check that ⊢ |+⟩ : S₊ᴵ. Hence,

⊢ H : ∀𝕏.((I → S₊) → (I → S₋) → (I → 𝕏)) → 𝕏    and    ⊢ |+⟩ : S₊ᴵ

give, by (→E),

⊢ (H) |+⟩ : (1/√2)·S₊ + (1/√2)·S₋.

And since (1/√2)·S₊ + (1/√2)·S₋ ≡ ∀X𝕐.X → 𝕐 → X, we conclude that

⊢ (H) |+⟩ : ∀X𝕐.X → 𝕐 → X.

Notice that (H) |+⟩ →* true.

4. Subject reduction

As we will now explain, the usual formulation of subject reduction is not directly satisfied. We discuss the alternatives and opt for a weakened version of subject reduction.

4.1. Principal types and subtyping alternatives

Since the terms of λvec are not explicitly typed, we are bound to have sequents such as Γ ⊢ t : T1 and Γ ⊢ t : T2 with distinct types T1 and T2 for the same term t. Using Rules (+I) and (αI) we get the valid typing judgement Γ ⊢ α·t + β·t : α·T1 + β·T2. Given that α·t + β·t reduces to (α + β)·t, a regular subject reduction would ask for the valid sequent Γ ⊢ (α + β)·t : α·T1 + β·T2. But since in general we do not have α·T1 + β·T2 ≡ (α + β)·T1 ≡ (α + β)·T2, we need to find a way around this.
A first approach would be to use the notion of principal types. However, since our type system includes System F, the usual examples for the absence of principal types apply to our setting: we cannot rely upon this method.
A second approach would be to ask for the sequent Γ ⊢ (α + β)·t : α·T1 + β·T2 to be valid. If we force this typing rule into the system, it seems to solve the issue, but then the type of a term becomes pretty much arbitrary: with typing context Γ, the term (α + β)·t would then be typed with any combination γ·T1 + δ·T2, where γ + δ = α + β.
The approach we favour in this paper is via a notion of order on types. The order, denoted with ⊑, will be chosen so that the factorisation rules make the types of terms smaller. We will ask in particular that (α + β)·T1 ⊑ α·T1 + β·T2 and (α + β)·T2 ⊑ α·T1 + β·T2 whenever T1 and T2 are types for the same term. This approach can also be extended to solve a second pitfall coming from the rule t + 0 → t. Indeed, although x : X ⊢ x + 0 : X + 0·T is well-typed for any inhabited T, the sequent x : X ⊢ x : X + 0·T is not valid in general. We therefore extend the ordering to also have X ⊑ X + 0·T.
Notice that we are not introducing a subtyping relation with this ordering. For example, although ⊢ (α + β)·λx.λy.x : (α + β)·∀X.X → (X → X) is valid and (α + β)·∀X.X → (X → X) ⊑ α·∀X.X → (X → X) + β·∀XY.X → (Y → Y), the sequent ⊢ (α + β)·λx.λy.x : α·∀X.X → (X → X) + β·∀XY.X → (Y → Y) is not valid.

4.2. Weak subject reduction

We define the (antisymmetric) ordering relation ⊑ on types discussed above as the smallest reflexive, transitive and
congruent relation satisfying the rules:

1. (α+β)·T ⊑ α·T + β·T′ if there are Γ, t such that Γ ⊢ t : T and Γ ⊢ t : T′.
2. T ⊑ T + 0·R for any type R.
3. If T ⊑ R and U ⊑ V, then T + S ⊑ R + S, α·T ⊑ α·R, U → T ⊑ U → R and ∀X.U ⊑ ∀X.V.

Note that the fact that Γ ⊢ t : T and Γ ⊢ t : T′ does not imply that T ⊑ T′. For instance, although T ⊑ T + 0·T′, we
do not have 0·T + T′ ⊑ T′.
Let R be any reduction rule from Fig. 1, and →_R a one-step reduction by rule R. A weak version of the subject reduction
theorem can be stated as follows.

Theorem 4.1 (Weak subject reduction). For any terms t, t′, any context Γ and any type T, if t →_R t′ and Γ ⊢ t : T, then:

1. if R ∉ Group F, then Γ ⊢ t′ : T;
2. if R ∈ Group F, then ∃S ⊑ T such that Γ ⊢ t′ : S and Γ ⊢ t : S.

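As a worked instance of case 2 (our own illustration, not part of the original development), consider the factorisation rule of Group F applied to a term t with two types, as in Section 4.1:

```latex
% Suppose \Gamma \vdash t : T_1 and \Gamma \vdash t : T_2.  The factorisation step
%   \alpha\cdot t + \beta\cdot t \;\to_F\; (\alpha+\beta)\cdot t
% turns the judgement on the left below into the one on the right:
\Gamma \vdash \alpha\cdot t + \beta\cdot t : \alpha\cdot T_1 + \beta\cdot T_2
\qquad\leadsto\qquad
\Gamma \vdash (\alpha+\beta)\cdot t : \underbrace{(\alpha+\beta)\cdot T_1}_{S}
% Take S = (\alpha+\beta)\cdot T_1.  By rule 1 of the ordering,
%   S \sqsubseteq \alpha\cdot T_1 + \beta\cdot T_2 = T,
% and the redex also admits S, since
%   \alpha\cdot T_1 + \beta\cdot T_1 \equiv (\alpha+\beta)\cdot T_1,
% exactly as required by case 2 of the theorem.
```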
4.3. Prerequisites to the proof

The proof of Theorem 4.1 requires some machinery that we develop in this section. Omitted proofs can be found in Appendix A.1.
The following lemma gives a characterisation of types as linear combinations of unit types and general variables.

Lemma 4.2 (Characterisation of types). For any type T in G, there exist n, m ∈ ℕ, α_1, …, α_n, β_1, …, β_m ∈ S, distinct unit types
U_1, …, U_n and distinct general variables 𝕏_1, …, 𝕏_m such that

T ≡ Σ_{i=1}^n α_i·U_i + Σ_{j=1}^m β_j·𝕏_j. □

Our system admits weakening, as stated by the following lemma.

Lemma 4.3 (Weakening). Let t be such that x ∉ FV(t). Then Γ ⊢ t : T is derivable if and only if Γ, x : U ⊢ t : T is derivable.

Proof. By a straightforward induction on the type derivation. □

Lemma 4.4 (Equivalence between sums of distinct elements (up to ≡)). Let U_1, …, U_n be a set of distinct (not equivalent) unit types,
and let V_1, …, V_m be also a set of distinct unit types. If Σ_{i=1}^n α_i·U_i ≡ Σ_{j=1}^m β_j·V_j, then m = n and there exists a permutation p of m
such that ∀i, α_i = β_{p(i)} and U_i ≡ V_{p(i)}. □

Lemma 4.5 (Equivalences ∀I).

1. Σ_{i=1}^n α_i·U_i ≡ Σ_{j=1}^m β_j·V_j ⟺ Σ_{i=1}^n α_i·∀X.U_i ≡ Σ_{j=1}^m β_j·∀X.V_j.
2. If Σ_{i=1}^n α_i·∀X.U_i ≡ Σ_{j=1}^m β_j·V_j, then ∀V_j, ∃W_j such that V_j ≡ ∀X.W_j.
3. T ≡ R ⟹ T[A/X] ≡ R[A/X]. □

For the proof of subject reduction, we use the standard strategy developed by Barendregt [10].¹ It consists in defining
a relation ≽ between types of the form ∀X.T and T. For our vectorial type system, we take into account linear combinations
of types:

Definition 4.6. For any types T, R, any context Γ and any term t such that Γ ⊢ t : R is derivable from Γ ⊢ t : T:

1. if X ∉ FV(Γ), write T ≽^t_{X,Γ} R if either
 • T ≡ Σ_{i=1}^n α_i·U_i and R ≡ Σ_{i=1}^n α_i·∀X.U_i, or
 • T ≡ Σ_{i=1}^n α_i·∀X.U_i and R ≡ Σ_{i=1}^n α_i·U_i[A/X].

2. if 𝒱 is a set of type variables such that 𝒱 ∩ FV(Γ) = ∅, we define ≽^t_{𝒱,Γ} inductively by:
 • if X ∈ 𝒱 and T ≽^t_{X,Γ} R, then T ≽^t_{{X},Γ} R;
 • if 𝒱_1, 𝒱_2 ⊆ 𝒱, T ≽^t_{𝒱_1,Γ} R and R ≽^t_{𝒱_2,Γ} S, then T ≽^t_{𝒱_1∪𝒱_2,Γ} S;
 • if T ≡ R, then T ≽^t_{𝒱,Γ} R.

¹ Note that Barendregt's original proof contains a mistake [20]. We use the corrected proof proposed in [4].

Example 4.7. Let the following be a valid derivation (each judgement is obtained from the previous one by the indicated rule):

Γ ⊢ t : T, with T ≡ Σ_{i=1}^n α_i·U_i
Γ ⊢ t : Σ_{i=1}^n α_i·U_i   (≡)
Γ ⊢ t : Σ_{i=1}^n α_i·∀X.U_i   (∀I, since X ∉ FV(Γ))
Γ ⊢ t : Σ_{i=1}^n α_i·U_i[V/X]   (∀E)
Γ ⊢ t : Σ_{i=1}^n α_i·∀Y.U_i[V/X]   (∀I, since Y ∉ FV(Γ))
Γ ⊢ t : R, with Σ_{i=1}^n α_i·∀Y.U_i[V/X] ≡ R   (≡)

Then T ≽^t_{{X,Y},Γ} R.

Note that this relation is stable under reduction in the following way:

Lemma 4.8 (≽-stability). If T ≽^t_{𝒱,Γ} R, t → r and Γ ⊢ r : T, then T ≽^r_{𝒱,Γ} R. □

The following lemma states that if two arrow types are ordered, then they are equivalent up to some substitutions.

Lemma 4.9 (Arrows comparison). If V → R ≽^t_{𝒱,Γ} ∀X⃗.(U → T), then U → T ≡ (V → R)[A⃗/Y⃗], with Y⃗ ∉ FV(Γ). □

Before proving Theorem 4.1, we need to prove some basic properties of the system.

Lemma 4.10 (Scalars). For any context Γ, term t, type T and scalar α, if Γ ⊢ α·t : T, then there exists a type R such that T ≡ α·R
and Γ ⊢ t : R. Moreover, if the minimum size of the derivation of Γ ⊢ α·t : T is s, then if T = α·R, the minimum size of the derivation
of Γ ⊢ t : R is at most s − 1; in any other case, its minimum size is at most s − 2. □

The following lemma shows that the type for 0 is always equivalent to some 0·R.

Lemma 4.11 (Type for zero). Let t = 0 or t = α·0; then Γ ⊢ t : T implies T ≡ 0·R. □

Lemma 4.12 (Sums). If Γ ⊢ t + r : S, then S ≡ T + R with Γ ⊢ t : T and Γ ⊢ r : R. Moreover, if the minimum size of the derivation of Γ ⊢ t + r : S
is s, then if S = T + R, the minimum sizes of the derivations of Γ ⊢ t : T and Γ ⊢ r : R are at most s − 1, and if S ≠ T + R, the minimum
sizes of these derivations are at most s − 2. □

Lemma 4.13 (Applications). If Γ ⊢ (t) r : T, then Γ ⊢ t : Σ_{i=1}^n α_i·∀X⃗.(U → T_i) and Γ ⊢ r : Σ_{j=1}^m β_j·U[A⃗_j/X⃗], where
Σ_{i=1}^n Σ_{j=1}^m α_i×β_j·T_i[A⃗_j/X⃗] ≽^{(t) r}_{𝒱,Γ} T for some 𝒱. □

Lemma 4.14 (Abstractions). If Γ ⊢ λx.t : T, then Γ, x : U ⊢ t : R where U → R ≽^{λx.t}_{𝒱,Γ} T for some 𝒱. □

A basis term can always be given a unit type.

Lemma 4.15 (Basis terms). For any context Γ, type T and basis term b, if Γ ⊢ b : T then there exists a unit type U such that T ≡ U. □

The final stone for the proof of Theorem 4.1 is a lemma relating well-typed terms and substitution.

Lemma 4.16 (Substitution lemma). For any term t, basis term b, term variable x, context Γ, types T, U, type variable X and type A,
where A is a unit type if X is a unit variable and a general type otherwise, we have:

1. if Γ ⊢ t : T, then Γ[A/X] ⊢ t : T[A/X];
2. if Γ, x : U ⊢ t : T and Γ ⊢ b : U, then Γ ⊢ t[b/x] : T. □

The proof of subject reduction (Theorem 4.1) follows by induction using the previously defined lemmas. It can be found in
full detail in Appendix A.2.

5. Strong normalisation

For proving strong normalisation of well-typed terms, we use reducibility candidates, a well-known method described
for example in [29, Ch. 14]. The technique is adapted to linear combinations of terms. Omitted proofs in this section can be
found in Appendix B.
A neutral term is a term that is not a λ-abstraction and that does reduce to something. The set of closed neutral terms is
denoted with N. We write Λ_0 for the set of closed terms and SN_0 for the set of closed, strongly normalising terms. If t is
any term, Red(t) is the set of all terms t′ such that t → t′. It is naturally extended to sets of terms. We say that a set S of
closed terms is a reducibility candidate, denoted with S ∈ RC, if the following conditions are verified:

RC_1 Strong normalisation: S ⊆ SN_0.
RC_2 Stability under reduction: t ∈ S implies Red(t) ⊆ S.
RC_3 Stability under neutral expansion: if t ∈ N and Red(t) ⊆ S, then t ∈ S.
RC_4 The common inhabitant: 0 ∈ S.

We define the notion of algebraic context over a list of terms t⃗, with the following grammar:

F(t⃗), G(t⃗) ::= t_i | F(t⃗) + G(t⃗) | α·F(t⃗) | 0,

where t_i is the i-th element of the list t⃗. Given a set of terms S = {s_i}_i, we write F(S) for the set of terms of the form F(s⃗)
when F spans over algebraic contexts.
We introduce two conditions on contexts, which will be handy to define some of the operations on candidates:

CC_1 If for some F, F(s⃗) ∈ S, then ∀i, s_i ∈ S.
CC_2 If for all i, s_i ∈ S and F is an algebraic context, then F(s⃗) ∈ S.

We then define the following operations on reducibility candidates.

1. Let A and B be in RC. A → B is the closure under RC_3 and RC_4 of the set of t ∈ Λ_0 such that (t) 0 ∈ B and such that
for all base terms b ∈ A, (t) b ∈ B.
2. If {A_i}_i is a family of reducibility candidates, Σ_i A_i is the closure under CC_1, CC_2, RC_2 and RC_3 of the set ∪_i A_i.

Remark 5.1. Notice that Σ_{i=1}^1 A ≠ A in general. Indeed, Σ_{i=1}^1 A is in particular the closure under CC_2, meaning that all linear combinations
of terms of A belong to Σ_{i=1}^1 A, whereas they might not be in A.

Remark 5.2. In the definition of algebraic contexts, a term t_i might appear at several positions in the context. However, for
any given algebraic context F(t⃗) there is always a linear algebraic context F_ℓ(t⃗′), with a suitably modified list of terms t⃗′,
such that F(t⃗) and F_ℓ(t⃗′) are the same terms. For example, choose the following (arguably very verbose) construction: if F
contains m placeholders, and if t⃗ is of size n, let t⃗′ be the list t_1 … t_1, t_2 … t_2, …, t_n … t_n with each time m repetitions of
each t_i. Then construct F_ℓ(t⃗′) exactly as F, except that for the i-th placeholder we pick the i-th copy of the corresponding
term in F. By construction, each element in the list t⃗′ is used at most once, and the term F_ℓ(t⃗′) is the same as the term F(t⃗).

Lemma 5.3. If A, B and all the A_i's are in RC, then so are A → B, Σ_i A_i and ∩_i A_i. □

A single type valuation is a partial function from type variables to reducibility candidates, that we define as a sequence
of comma-separated mappings, with ∅ denoting the empty valuation: ρ := ∅ | ρ, X ↦ A. Type variables are interpreted
using pairs of single type valuations, that we simply call valuations, with common domain: ρ = (ρ_+, ρ_−) with |ρ_+| = |ρ_−|.
Given a valuation ρ = (ρ_+, ρ_−), the complementary valuation ρ̄ is the pair (ρ_−, ρ_+). We write (X_+, X_−) ↦ (A_+, A_−) for the
valuation (X_+ ↦ A_+, X_− ↦ A_−). A valuation is called valid if for all X, ρ_−(X) ⊆ ρ_+(X).
From now on, we will consider the following grammar:

𝒰, 𝒱, 𝒲 ::= U | 𝕏.

That is, we will use 𝒰, 𝒱, 𝒲 for both unit types and 𝕏-kind variables.
To define the interpretation of a type T, we use the following result.
116 P. Arrighi et al. / Information and Computation 254 (2017) 105139

Lemma 5.4. Any type T has a unique (up to ≡) canonical decomposition T ≡ Σ_{i=1}^n α_i·𝒰_i such that for all l, k, 𝒰_l ≢ 𝒰_k. □

The interpretation ⟦T⟧_ρ of a type T in a valuation ρ = (ρ_+, ρ_−), defined for each free type variable of T, is given by:

⟦X⟧_ρ = ρ_+(X),
⟦U → T⟧_ρ = ⟦U⟧_ρ̄ → ⟦T⟧_ρ,
⟦∀X.U⟧_ρ = ∩_{A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)},
⟦T⟧_ρ = Σ_i ⟦𝒰_i⟧_ρ, if T ≡ Σ_i α_i·𝒰_i is the canonical decomposition of T and T ≢ 𝒰.

From Lemma 5.3, the interpretation of any type is a reducibility candidate.
Reducibility candidates deal with closed terms, whereas proving the adequacy lemma by induction requires the use
of open terms with some assumptions on their free variables, that will be guaranteed by a context. Therefore we use
substitutions to close terms:

σ := ∅ | (x ↦ b; σ),

with t_∅ = t and t_{(x↦b;σ)} = t[b/x]_σ. All the substitutions end with ∅, hence we omit it when not necessary.
Given a context Γ, we say that a substitution σ satisfies Γ for the valuation ρ (notation: σ ∈ ⟦Γ⟧_ρ) when (x : U) ∈ Γ
implies σ(x) ∈ ⟦U⟧_ρ̄ (note the change in polarity). A typing judgement Γ ⊢ t : T is said to be valid (notation Γ ⊨ t : T) if:

• in case T ≡ 𝒰, then for every valuation ρ, and for every substitution σ ∈ ⟦Γ⟧_ρ, we have t_σ ∈ ⟦𝒰⟧_ρ;
• in the other case, that is, T ≡ Σ_{i=1}^n α_i·𝒰_i with n > 1, such that for all i ≠ j, 𝒰_i ≢ 𝒰_j (notice that by Lemma 5.4 such a
decomposition always exists), then for every valuation ρ and set of valuations {ρ_i}_n, where ρ_i acts on FV(𝒰_i) \ FV(Γ),
and for every substitution σ ∈ ⟦Γ⟧_ρ, we have t_σ ∈ Σ_{i=1}^n ⟦𝒰_i⟧_{ρ,ρ_i}.

Lemma 5.5. For any types T and A, variable X and valuation ρ, we have ⟦T[A/X]⟧_ρ = ⟦T⟧_{ρ,(X_+,X_−)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} and ⟦T[A/X]⟧_ρ̄ =
⟦T⟧_{ρ̄,(X_−,X_+)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}. □

The proof of the Adequacy Lemma as well as the machinery of needed auxiliary lemmas can be found in Appendix B.2.

Lemma 5.6 (Adequacy lemma). Every derivable typing judgement is valid: for every valid sequent Γ ⊢ t : T, we have Γ ⊨ t : T. □

Theorem 5.7 (Strong normalisation). If Γ ⊢ t : T is a valid sequent, then t is strongly normalising.

Proof. If Γ is the list (x_i : U_i)_i, the sequent ⊢ λx_1.…λx_n.t : U_1 → (⋯(U_n → T)⋯) is derivable. Using Lemma 5.6, we
deduce that for any valuation ρ and any substitution σ ∈ ⟦∅⟧_ρ, we have (λx_1.…λx_n.t)_σ ∈ ⟦U_1 → (⋯(U_n → T)⋯)⟧_ρ. By construction, σ does
nothing on t: t_σ = t. Since ⟦U_1 → (⋯(U_n → T)⋯)⟧_ρ is a reducibility candidate, λx_1.…λx_n.t is strongly normalising, and hence t is strongly
normalising. □

6. Interpretation of typing judgements

6.1. The general case

In the general case the calculus can represent infinite-dimensional linear operators such as λx.x, λx.λy.y, λx.λf.(f) x,
…, and their applications. Even for such general terms t, the vectorial type system provides much information about the
superposition of basis terms Σ_i α_i·b_i to which t reduces, as explained in Theorem 6.1. How much information is brought
by the type system in the finitary case is the topic of Section 6.2.

Theorem 6.1 (Characterisation of terms). Let T be a generic type with canonical decomposition Σ_{i=1}^n α_i·∀X⃗.U_i, in the sense of Lemma 5.4.
If ⊢ t : T, then t →* Σ_{i=1}^n Σ_{j=1}^{m_i} β_{ij}·b_{ij}, where for all i, ⊢ b_{ij} : U_i and Σ_{j=1}^{m_i} β_{ij} = α_i, and with the convention that Σ_{j=1}^0 β_{ij} = 0
and Σ_{j=1}^0 β_{ij}·b_{ij} = 0. □
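As an illustration (our own instance, not part of the original text), take the closed term t = a·true + b·true + c·false, with the types T and F of the booleans introduced in Section 6.2.1 below:

```latex
% The type of t has canonical decomposition (a+b)\cdot T + c\cdot F, so the
% theorem predicts a reduct of the shape
%   \beta_{11}\cdot b_{11} + \beta_{12}\cdot b_{12} + \beta_{21}\cdot b_{21}
% with \vdash b_{1j} : T, \vdash b_{21} : F, \beta_{11}+\beta_{12} = a+b, \beta_{21} = c.
% The factorisation rules indeed give
%   t \;\to^*\; (a+b)\cdot\mathbf{true} + c\cdot\mathbf{false}.
\vdash a\cdot\mathbf{true} + b\cdot\mathbf{true} + c\cdot\mathbf{false}
  \;:\; (a+b)\cdot T + c\cdot F
```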

The detailed proof of the previous theorem can be found in Appendix C.

6.2. The finitary case: expressing matrices and vectors

In what we call the finitary case, we show how to encode finite-dimensional linear operators, i.e. matrices, together
with their applications to vectors, as well as matrix and tensor products. Theorem 6.2 shows that we can encode matrices,
vectors and operations upon them, and that the type system will provide the result of such operations.

6.2.1. In 2 dimensions
In this section we come back to the motivating example introducing the type system and we show how λvec handles the
Hadamard gate, and how to encode matrices and vectors.
With an empty typing context, the booleans true = λx.λy.x and false = λx.λy.y can be respectively typed with the
types T = ∀XY.X → (Y → X) and F = ∀XY.X → (Y → Y). The superposition has the following type: ⊢ α·true + β·false :
α·T + β·F. (Note that it can also be typed with (α + β)·∀X.X → (X → X).)
The linear map U sending true to a·true + b·false and false to c·true + d·false, that is

true ↦ a·true + b·false,
false ↦ c·true + d·false,

is written as

U = λx.{((x) [a·true + b·false]) [c·true + d·false]}.

The following sequent is valid:

⊢ U : ∀X.((I → (a·T + b·F)) → (I → (c·T + d·F)) → I → X) → X.
This is consistent with the discussion in the introduction: the Hadamard gate is the case a = b = c = 1/√2 and d = −1/√2. One
can check that with an empty typing context, (U) true is well typed of type a·T + b·F, as expected since it reduces to
a·true + b·false.
The term (H) ((1/√2)·(true + false)) is well-typed of type T + 0·F. Since the term reduces to true, this is consistent with
the subject reduction: we indeed have T ⊑ T + 0·F.
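The computation behind this example can be checked numerically. The following sketch is ours, with the gate written as a plain numpy matrix rather than as a λvec term; it mirrors the type T + 0·F computed above:

```python
import numpy as np

# Hadamard coefficients from the text: a = b = c = 1/sqrt(2), d = -1/sqrt(2).
s = 1 / np.sqrt(2)
H = np.array([[s, s],
              [s, -s]])

# true and false are encoded as the basis vectors (1, 0) and (0, 1).
true, false = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# (H) applied to 1/sqrt(2)·(true + false) gives back true,
# just as the type system predicts with the type T + 0·F.
result = H @ (s * (true + false))
print(result)  # ≈ [1. 0.]
```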
But we can do more than typing 2-dimensional vectors and 2 × 2-matrices: using the same technique we can encode vectors
and matrices of any size.

6.2.2. Vectors in n dimensions


The 2-dimensional space is represented by the span of λx_1x_2.x_1 and λx_1x_2.x_2; the n-dimensional space is simply represented
by the span of all the λx_1…x_n.x_i, for i = 1…n. As for the two-dimensional case, where

⊢ α_1·λx_1x_2.x_1 + α_2·λx_1x_2.x_2 : α_1·∀X_1X_2.X_1 + α_2·∀X_1X_2.X_2,

an n-dimensional vector is typed with

⊢ Σ_{i=1}^n α_i·λx_1…x_n.x_i : Σ_{i=1}^n α_i·∀X_1…X_n.X_i.

We use the notations

e^n_i = λx_1…x_n.x_i,  E^n_i = ∀X_1…X_n.X_i,

and we write

⟦(α_1, …, α_n)^T⟧^term = α_1·e^n_1 + ⋯ + α_n·e^n_n = Σ_{i=1}^n α_i·e^n_i,
⟦(α_1, …, α_n)^T⟧^type = α_1·E^n_1 + ⋯ + α_n·E^n_n = Σ_{i=1}^n α_i·E^n_i

for the encoding of the column vector (α_1, …, α_n)^T as a term and as a type.

6.2.3. n × m matrices
Once the representation of vectors is chosen, it is easy to generalize the representation of 2 × 2 matrices to the n × m
case. Suppose that the matrix U has entries α_ij, for i = 1…n rows and j = 1…m columns; then its representation is

⟦U⟧^term_{n×m} = λx.{((x) [α_11·e^n_1 + ⋯ + α_n1·e^n_n]) ⋯ [α_1m·e^n_1 + ⋯ + α_nm·e^n_n]},

whose j-th argument is the encoding of the j-th column of U, and its type is

⟦U⟧^type_{n×m} = ∀X.((I → (α_11·E^n_1 + ⋯ + α_n1·E^n_n)) → ⋯ → (I → (α_1m·E^n_1 + ⋯ + α_nm·E^n_n)) → I → X) → X,

that is, an almost direct encoding of the matrix U.
We also use the shortcut notation

mat(t_1, …, t_n) = λx.{(⋯((x) [t_1]) ⋯) [t_n]}.

6.2.4. Useful constructions
In this section, we describe a few terms representing constructions that will be used later on.

Projections. The first useful family of terms are the projections, sending a vector to its i-th coordinate:

(α_1, …, α_i, …, α_n)^T ↦ (0, …, α_i, …, 0)^T.

Using the matrix representation, the term projecting the i-th coordinate of a vector of size n is

p^n_i = mat(0, …, 0, e^n_i, 0, …, 0)  (with e^n_i in i-th position).

We can easily verify that

⊢ p^n_i : ⟦D_i⟧^type_{n×n},

where D_i is the n × n matrix whose entries are all 0 except for a 1 at position (i, i), and that

(p^n_{i_0}) (Σ_{i=1}^n α_i·e^n_i) →* α_{i_0}·e^n_{i_0}.
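In matrix form, the behaviour of the projections can be sketched as follows (our own numerical check; p(n, i) stands for the matrix encoded by p^n_i, with 0-based indices):

```python
import numpy as np

def p(n, i):
    """Matrix of the i-th projection: all zeroes except a 1 at position (i, i)."""
    m = np.zeros((n, n))
    m[i, i] = 1.0
    return m

alpha = np.array([2.0, 3.0, 5.0])   # the vector sum_i alpha_i · e^n_i
projected = p(3, 1) @ alpha          # second coordinate kept, the rest zeroed
print(projected)                     # [0. 3. 0.]
```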

Vectors and diagonal matrices. Using the projections defined in the previous section, it is possible to encode the map sending
a vector of size n to the corresponding n × n diagonal matrix:

(α_1, …, α_n)^T ↦ diag(α_1, …, α_n),

with the term

diag^n = λb.mat((p^n_1) {b}, …, (p^n_n) {b})

of type

⊢ diag^n : ⟦(α_1, …, α_n)^T⟧^type → ⟦diag(α_1, …, α_n)⟧^type_{n×n}.

It is easy to check that

(diag^n) (Σ_{i=1}^n α_i·e^n_i) →* mat(α_1·e^n_1, …, α_n·e^n_n).
Extracting a column vector out of a matrix. Another construction that is worth exhibiting is the operation extracting the i-th
column of a matrix:

(α_11 … α_1n; …; α_m1 … α_mn) ↦ (α_1i, …, α_mi)^T.

It is simply defined by multiplying the input matrix with the i-th base column vector:

col^n_i = λx.(x) e^n_i,

and one can easily check that this term has type

⟦(α_11 … α_1n; …; α_m1 … α_mn)⟧^type_{m×n} → ⟦(α_1i, …, α_mi)^T⟧^type_m.

Note that the same term col^n_i can be typed with several values of m.
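The same idea in plain linear algebra (our own check): multiplying a matrix by the i-th base vector extracts its i-th column.

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # a 3 x 2 matrix
e2 = np.array([0.0, 1.0])             # the base vector e^2_2
col2 = M @ e2                         # the counterpart of (col^2_2) applied to M
print(col2)                           # [2. 4. 6.]
```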

6.2.5. A language of matrices and vectors
In this section we formalize what was informally presented in the previous sections: the fact that one can encode simple
matrix and vector operations in λvec, and the fact that the type system serves as a witness for the result of the encoded
operation.
We define the language Mat of matrices and vectors with the grammar

M, N ::= α | M ⊗ N | (M) N,
u, v ::= β | u ⊗ v | (M) u,

where α ranges over the set of matrices and β over the set of (column) vectors. Terms are implicitly typed: types of matrices
are (m, n), where m and n range over positive integers, while types of vectors are simply integers. The typing rules are the
following:

• if α ∈ C^{m×n}, then α : (m, n);
• if M : (m, n) and N : (m′, n′), then M ⊗ N : (mm′, nn′);
• if M : (m, n′) and N : (n′, n), then (M) N : (m, n);
• if β ∈ C^m, then β : m;
• if u : m and v : n, then u ⊗ v : mn;
• if M : (m, n) and u : n, then (M) u : m.

The operational semantics of this language is the natural interpretation of the terms as matrices and vectors. If M computes
the matrix α, we write M ↓ α. Similarly, if u computes the vector β, we write u ↓ β.
Following what we already said, matrices and vectors can be interpreted as types and terms in λvec. The map ⟦·⟧^term
sends terms of Mat to terms of λvec and the map ⟦·⟧^type sends matrices and vectors to types of λvec.
Vectors and matrices are encoded as in Sections 6.2.2 and 6.2.3.


As we already discussed, the matrix-vector multiplication is simply the application of terms in λvec:

⟦(M) u⟧^term = (⟦M⟧^term) ⟦u⟧^term.

The matrix multiplication is performed by first extracting the column vectors, then performing the matrix-vector multiplication:
this gives a column of the final matrix. We conclude by recomposing the final matrix column-wise.
That is done with the term

app = λx.λy.mat((x) ((col^k_1) y), …, (x) ((col^k_k) y)),

where k is the number of columns of the second matrix, and its type is

⟦(α_{ji})_{j=1…m, i=1…n}⟧^type_{m×n} → ⟦(β_{il})_{i=1…n, l=1…k}⟧^type_{n×k} → ⟦(Σ_{i=1}^n α_{ji}·β_{il})_{j=1…m, l=1…k}⟧^type_{m×k}.

Hence,

⟦(M) N⟧^term = ((app) ⟦M⟧^term) ⟦N⟧^term.
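The column-by-column strategy of app can be sketched numerically (our own rendition of the recomposition, not the λvec term):

```python
import numpy as np

def app(M, N):
    """Matrix product recomposed column-wise: column l of M·N is M applied
    to the l-th column of N, mirroring what the term app does."""
    cols = [M @ N[:, l] for l in range(N.shape[1])]
    return np.column_stack(cols)

M = np.array([[1.0, 2.0], [3.0, 4.0]])
N = np.array([[5.0, 6.0], [7.0, 8.0]])
P = app(M, N)
print(P)   # same as M @ N
```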


For defining the tensor of vectors, we need to multiply the coefficients of the vectors:

(α_1, …, α_n)^T ⊗ (β_1, …, β_m)^T = (α_1·β_1, …, α_1·β_m, …, α_n·β_1, …, α_n·β_m)^T.

We perform this operation in several steps. First, we map the two vectors (α_i)_i and (β_j)_j into diagonal matrices of size mn × mn:

(α_1, …, α_n)^T ↦ diag(α_1, …, α_1, …, α_n, …, α_n)  (each α_i repeated m times),
(β_1, …, β_m)^T ↦ diag(β_1, …, β_m, …, β_1, …, β_m)  (the block β_1, …, β_m repeated n times).

These two operations can be represented as terms of λvec respectively as follows:

m_1^{n,m} = λb.mat((p^n_1){b}, …, (p^n_1){b}, …, (p^n_n){b}, …, (p^n_n){b})  (each (p^n_i){b} repeated m times),
m_2^{m,n} = λb′.mat((p^m_1){b′}, …, (p^m_m){b′}, …, (p^m_1){b′}, …, (p^m_m){b′})  (the whole block repeated n times).

It is now enough to multiply these two matrices together to retrieve the diagonal,

diag(α_1, …, α_1, …, α_n, …, α_n) · diag(β_1, …, β_m, …, β_1, …, β_m) = diag(α_1·β_1, …, α_1·β_m, …, α_n·β_1, …, α_n·β_m),

and this can be implemented through matrix-vector multiplication:

tens^{n,m} = λb.λc.((m_1^{n,m}) b) (((m_2^{m,n}) c) (Σ_{i=1}^{mn} e^{mn}_i)).

Hence, if u : n and v : m, we have

⟦u ⊗ v⟧^term = ((tens^{n,m}) ⟦u⟧^term) ⟦v⟧^term.
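The two-diagonal-matrices trick can be checked numerically (a sketch of ours): the product of the two mn × mn diagonal matrices, applied to the all-ones vector, is exactly the Kronecker product of the two vectors.

```python
import numpy as np

u = np.array([2.0, 3.0])          # (alpha_i), n = 2
v = np.array([5.0, 7.0, 11.0])    # (beta_j),  m = 3
n, m = len(u), len(v)

D1 = np.diag(np.repeat(u, m))     # each alpha_i repeated m times (m_1^{n,m})
D2 = np.diag(np.tile(v, n))       # the block (beta_j) repeated n times (m_2^{m,n})
ones = np.ones(n * m)             # the vector sum_i e^{mn}_i

tens = D1 @ (D2 @ ones)
print(tens)                       # [10. 14. 22. 15. 21. 33.]
```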



The tensor of matrices is done column by column: the columns of the tensor of two matrices are the tensors of a column
of the first matrix with a column of the second, taken in order. If M is a matrix of size m × m′ and N a matrix of size n × n′,
then M ⊗ N has size mn × m′n′, and it can be implemented as

Tens^{m,n} = λb.λc.mat(((tens^{m,n}) ((col^{m′}_1) b)) ((col^{n′}_1) c), …, ((tens^{m,n}) ((col^{m′}_{m′}) b)) ((col^{n′}_{n′}) c)).

Hence, if M : (m, m′) and N : (n, n′), we have

⟦M ⊗ N⟧^term = ((Tens^{m,n}) ⟦M⟧^term) ⟦N⟧^term.
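Again a numerical sketch of ours: the Kronecker product of matrices can be built column by column, tensoring each column of M with each column of N.

```python
import numpy as np

def tensor_by_columns(M, N):
    """Columns of M ⊗ N are kron(column i of M, column j of N), in order."""
    cols = [np.kron(M[:, i], N[:, j])
            for i in range(M.shape[1])
            for j in range(N.shape[1])]
    return np.column_stack(cols)

M = np.array([[1.0, 2.0], [3.0, 4.0]])
N = np.array([[0.0, 5.0], [6.0, 7.0]])
K = tensor_by_columns(M, N)
print(K)   # same as np.kron(M, N)
```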

Theorem 6.2. The denotation of Mat as terms and types of λvec is sound in the following sense:

M ↓ α implies ⊢ ⟦M⟧^term : ⟦α⟧^type,
u ↓ β implies ⊢ ⟦u⟧^term : ⟦β⟧^type.

Proof. The proof is a straightforward structural induction on M and u. □

6.3. λvec and quantum computation

In quantum computation, data is encoded on normalised vectors in Hilbert spaces. For our purpose, their interesting
property is to be modules over the ring of complex numbers. The smallest non-trivial such space is the space of qubits.
The space of qubits is the two-dimensional vector space C², together with a chosen orthonormal basis {|0⟩, |1⟩}. A quantum
bit (or qubit) is a normalised vector α|0⟩ + β|1⟩, where |α|² + |β|² = 1. In quantum computation, the operations on qubits
that are usually considered are the quantum gates, i.e. a chosen set of unitary operations. For our purpose, their interesting
property is to be linear.
The fact that one can encode quantum circuits in λvec is a corollary of Theorem 6.2. Indeed, a quantum circuit can
be regarded as a sequence of multiplications and tensors of matrices. The language of terms can faithfully represent those,
whereas the type system can serve as an abstract interpretation of the actual unitary map computed by the circuit.
We believe that this tool is a first step towards lifting the quantumness of algebraic λ-calculi to the level of a type-based
analysis. It could also be a step towards a quantum theoretical logic coming readily with a Curry–Howard isomorphism.
The logic we are sketching merges intuitionistic logic and vectorial structure, which makes it intriguing.
The next step in the study of the quantumness of the linear-algebraic λ-calculus is the exploration of the notion of
orthogonality between terms, and the validation of this notion by means of a compilation into quantum circuits. The work
of [41] shows that it is worthwhile pursuing this direction.

6.4. λvec and other calculi

No direct connection seems to exist between λvec and intersection and union types [9,38]. However, there is an ongoing
project of a new type system based on intersections, which may take some of the ideas from the vectorial λ-calculus.
Indeed, the sum resembles a non-idempotent intersection, with some extra quantitative information. In [17] a type
system with non-idempotent intersections has been used to compute a bound on the normalisation time, and in [11,33]
to provide new characterisations of strongly normalising terms. In any case, the Scalar type system [4] seems closer
to these results: only scalars are considered, and so t + t has type 2·T if t has type T; hence the difference with
non-idempotent intersection types is that the scalars are not just natural numbers, but members of a given ring. The vectorial
λ-calculus goes beyond, as it not only has the quantitative information, through the scalars, but also allows one to
intersect different types. On the other hand, the interpretation of linear combinations we give in this paper is more in line
with a union than an intersection, which may be an interesting starting question to follow.
A different path has been started in [22], where a non-idempotent intersection (simple) type system has been considered,
with no scalars, and where all the isomorphisms of such a system are made explicit with an equivalence relation.
One outcome of that work has been a characterisation of superpositions and projective measurements [23]. A natural
objective to pursue is to add scalars to it, by somehow merging it with λvec.

Acknowledgments

We would like to thank Gilles Dowek and Barbara Petit for enlightening discussions.

Appendix A. Detailed proofs of lemmas and theorems in Section 4

A.1. Lemmas from Section 4.3

Lemma 4.2 (Characterisation of types). For any type T in G, there exist n, m ∈ ℕ, α_1, …, α_n, β_1, …, β_m ∈ S, distinct unit types
U_1, …, U_n and distinct general variables 𝕏_1, …, 𝕏_m such that

T ≡ Σ_{i=1}^n α_i·U_i + Σ_{j=1}^m β_j·𝕏_j.

Proof. Structural induction on T.

• Let T be a unit type U; then take α_1 = 1, n = 1 and m = 0, and so T ≡ Σ_{i=1}^1 1·U ≡ 1·U.
• Let T = γ·T′; then by the induction hypothesis T′ ≡ Σ_{i=1}^n α_i·U_i + Σ_{j=1}^m β_j·𝕏_j, so T = γ·T′ ≡ γ·(Σ_{i=1}^n α_i·U_i +
Σ_{j=1}^m β_j·𝕏_j) ≡ Σ_{i=1}^n (γ×α_i)·U_i + Σ_{j=1}^m (γ×β_j)·𝕏_j.
• Let T = R + S; then by the induction hypothesis R ≡ Σ_{i=1}^n α_i·U_i + Σ_{j=1}^m β_j·𝕏_j and S ≡ Σ_{i=1}^{n′} α′_i·U′_i + Σ_{j=1}^{m′} β′_j·𝕏′_j, so
T = R + S ≡ Σ_{i=1}^n α_i·U_i + Σ_{i=1}^{n′} α′_i·U′_i + Σ_{j=1}^m β_j·𝕏_j + Σ_{j=1}^{m′} β′_j·𝕏′_j. If the U_i and the U′_i are all different from each other,
we have finished; in the other case, if U_k = U′_h, notice that α_k·U_k + α′_h·U′_h ≡ (α_k + α′_h)·U_k.
• Let T = 𝕏; then take β_1 = 1, m = 1 and n = 0, and so T ≡ Σ_{j=1}^1 1·𝕏 ≡ 1·𝕏. □


Definition A.1. Let F be an algebraic context with n holes. Let U⃗ = U_1, …, U_n be a list of n unit types. If U is a unit type,
we write Ū for the set of unit types equivalent to U:

Ū := {V | V is unit and V ≡ U}.

The context vector v_F(U⃗) associated with the context F and the unit types U⃗ is a partial map from the set S = {Ū} to scalars.
It is inductively defined as follows: v_{α·F}(U⃗) := α·v_F(U⃗), v_{F+G}(U⃗) := v_F(U⃗) + v_G(U⃗), and finally v_{[i]}(U⃗) := {Ū_i ↦ 1}. The
sum is defined on these partial maps as follows:

(f + g)(Ū) = f(Ū) + g(Ū), if both are defined;
(f + g)(Ū) = f(Ū), if f(Ū) is defined but not g(Ū);
(f + g)(Ū) = g(Ū), if g(Ū) is defined but not f(Ū);
(f + g)(Ū) is not defined if neither f(Ū) nor g(Ū) is defined.

Scalar multiplication is defined as follows:

(α·f)(Ū) = α·(f(Ū)), if f(Ū) is defined;
(α·f)(Ū) is not defined if f(Ū) is not defined.


Lemma A.2. Let F and G be two algebraic contexts with respectively n and m holes. Let U⃗ be a list of n unit types, and V⃗ be a list of m
unit types. Then F(U⃗) ≡ G(V⃗) implies v_F(U⃗) = v_G(V⃗).


Proof. The derivation of F(U⃗) ≡ G(V⃗) essentially consists in a sequence of the elementary rules (or congruence thereof) in
Fig. 2, composed with transitivity:

F(U⃗) = W_1 ≡ W_2 ≡ ⋯ ≡ W_k = G(V⃗).

We prove the result by induction on k.

• Case k = 1. Then F(U⃗) is syntactically equal to G(V⃗): we are done.
• Suppose that the result is true for sequences of size k, and let

F(U⃗) = W_1 ≡ W_2 ≡ ⋯ ≡ W_k ≡ W_{k+1} = G(V⃗).

Let us concentrate on the first step F(U⃗) ≡ W_2: it is an elementary step from Fig. 2. By structural induction on the
proof of F(U⃗) ≡ W_2 (which only uses congruence and elementary steps, and not transitivity), we can show that W_2
is of the form F′(U⃗′), where v_F(U⃗) = v_{F′}(U⃗′). We can now apply the induction hypothesis, because the
sequence of elementary rewrites from F′(U⃗′) to G(V⃗) is of size k. Therefore v_{F′}(U⃗′) = v_G(V⃗). We can then conclude
that v_F(U⃗) = v_G(V⃗).

This concludes the proof of the lemma. □

Lemma 4.4 (Equivalence between sums of distinct elements (up to ≡)). Let U_1, …, U_n be a set of distinct (not equivalent) unit types,
and let V_1, …, V_m be also a set of distinct unit types. If Σ_{i=1}^n α_i·U_i ≡ Σ_{j=1}^m β_j·V_j, then m = n and there exists a permutation p of n
such that ∀i, α_i = β_{p(i)} and U_i ≡ V_{p(i)}.

Proof. Let S = Σ_{i=1}^n α_i·U_i and T = Σ_{j=1}^m β_j·V_j. Both S and T can respectively be written as F(U⃗) and G(V⃗). Using
Lemma A.2, we conclude that v_F(U⃗) = v_G(V⃗). Since all the U_i's are pairwise non-equivalent, the Ū_i's are pairwise distinct, and

v_F(U⃗) = {Ū_i ↦ α_i | i = 1…n}.

Similarly, the V̄_j's are pairwise distinct, and

v_G(V⃗) = {V̄_j ↦ β_j | j = 1…m}.

We obtain the desired result because these two partial maps are supposed to be equal. Indeed, this implies:

• m = n, because the domains are equal (so they should have the same size).
• Again using the fact that the domains are equal, the sets {Ū_i} and {V̄_j} are equal: this means there exists a permutation
p of n such that ∀i, Ū_i = V̄_{p(i)}, meaning U_i ≡ V_{p(i)}.
• Because the partial maps are equal, the images of a given element Ū_i = V̄_{p(i)} under v_F and v_G are in fact the same:
we therefore have α_i = β_{p(i)}.

And this closes the proof of the lemma. □

Lemma 4.5 (Equivalences ∀I). Let U_1, …, U_n be a set of distinct (not equivalent) unit types and let V_1, …, V_n be also a set of distinct
unit types.

1. Σ_{i=1}^n α_i·U_i ≡ Σ_{j=1}^m β_j·V_j ⟺ Σ_{i=1}^n α_i·∀X.U_i ≡ Σ_{j=1}^m β_j·∀X.V_j.
2. If Σ_{i=1}^n α_i·∀X.U_i ≡ Σ_{j=1}^m β_j·V_j, then ∀V_j, ∃W_j such that V_j ≡ ∀X.W_j.
3. T ≡ R ⟹ T[A/X] ≡ R[A/X].

Proof. Item (1): From Lemma 4.4, m = n, and without loss of generality, for all i, α_i = β_i and U_i ≡ V_i in the left-to-right
direction, ∀X.U_i ≡ ∀X.V_i in the right-to-left direction. In both cases we easily conclude.
Item (2) is similar.
Item (3) is a straightforward induction on the equivalence T ≡ R. □

Lemma 4.8 (≽-stability). If T ≽^t_{𝒱,Γ} R, t → r and Γ ⊢ r : T, then T ≽^r_{𝒱,Γ} R.

Proof. It suffices to show this for ≽^t_{X,Γ}, with X ∈ 𝒱. Observe that since T ≽^t_{X,Γ} R, then X ∉ FV(Γ). We only have to prove
that Γ ⊢ r : R is derivable from Γ ⊢ r : T. We proceed now by cases:

• T ≡ Σ_{i=1}^n α_i·U_i and R ≡ Σ_{i=1}^n α_i·∀X.U_i; then using rules ∀I and ≡, we can deduce Γ ⊢ r : R.
• T ≡ Σ_{i=1}^n α_i·∀X.U_i and R ≡ Σ_{i=1}^n α_i·U_i[A/X]; then using rules ∀E and ≡, we can deduce Γ ⊢ r : R. □

Lemma 4.9 (Arrows comparison). If V → R ≽^t_{𝒱,Γ} ∀X⃗.(U → T), then U → T ≡ (V → R)[A⃗/Y⃗], with Y⃗ ∉ FV(Γ).

Proof. Let (·)∘ be a map from types to types defined as follows:

X∘ = X,  (U → T)∘ = U → T,  (∀X.T)∘ = T∘,  (α·T)∘ = α·T∘,  (T + R)∘ = T∘ + R∘.

We need three intermediate results:

1. If T ≡ R, then T∘ ≡ R∘.
2. For any types U, A, there exists B such that (U[A/X])∘ = U∘[B/X].
3. For any types V, U, there exists A⃗ such that if V ≽^t_{𝒱,Γ} ∀X⃗.U, then U∘ ≡ V∘[A⃗/X⃗].

Proofs.

1. Induction on the equivalence rules. We only give the base cases, since the inductive step, given by the context where
the equivalence is applied, is trivial.
 • (1·T)∘ = 1·T∘ ≡ T∘.
 • (α·(β·T))∘ = α·(β·T∘) ≡ (α×β)·T∘ = ((α×β)·T)∘.
 • (α·T + α·R)∘ = α·T∘ + α·R∘ ≡ α·(T∘ + R∘) = (α·(T + R))∘.
 • (α·T + β·T)∘ = α·T∘ + β·T∘ ≡ (α+β)·T∘ = ((α+β)·T)∘.
 • (T + R)∘ = T∘ + R∘ ≡ R∘ + T∘ = (R + T)∘.
 • (T + (R + S))∘ = T∘ + (R∘ + S∘) ≡ (T∘ + R∘) + S∘ = ((T + R) + S)∘.
2. Structural induction on U.
 • U = X. Then (X[V/X])∘ = V∘ = X[V∘/X] = X∘[V∘/X].
 • U = Y. Then (Y[A/X])∘ = Y = Y∘[A/X].
 • U = V → T. Then ((V → T)[A/X])∘ = (V[A/X] → T[A/X])∘ = V[A/X] → T[A/X] = (V → T)[A/X] = (V → T)∘[A/X].
 • U = ∀Y.V. Then ((∀Y.V)[A/X])∘ = (∀Y.V[A/X])∘ = (V[A/X])∘, which by the induction hypothesis is equivalent to
V∘[B/X] = (∀Y.V)∘[B/X].
3. It suffices to show this for V ≽^t_{X,Γ} ∀X⃗.U. Cases:
 • ∀X⃗.U ≡ ∀Y.V; then notice that (∀X⃗.U)∘ ≡_{(1)} (∀Y.V)∘ = V∘.
 • V ≡ ∀Y.W and ∀X⃗.U ≡ W[A/X]; then

(∀X⃗.U)∘ ≡_{(1)} (W[A/X])∘ =_{(2)} W∘[B/X] = (∀Y.W)∘[B/X] ≡_{(1)} V∘[B/X].

Proof of the lemma. U → T = (U → T)∘, which by intermediate result 3 is equivalent to (V → R)∘[A⃗/X⃗] = (V →
R)[A⃗/X⃗]. □

Lemma 4.10 (Scalars). For any context Γ, term t, type T and scalar α, if Γ ⊢ α·t : T, then there exists a type R such that T ≡ α·R
and Γ ⊢ t : R. Moreover, if the minimum size of the derivation of Γ ⊢ α·t : T is s, then if T = α·R, the minimum size of the derivation
of Γ ⊢ t : R is at most s − 1; in any other case, its minimum size is at most s − 2.

Proof. We proceed by induction on the typing derivation, according to the last rule used.

• ∀I: the derivation ends with

Γ ⊢ α·t : Σ_{i=1}^n α_i·U_i   X ∉ FV(Γ)
————————————————————— ∀I
Γ ⊢ α·t : Σ_{i=1}^n α_i·∀X.U_i

By the induction hypothesis Σ_{i=1}^n α_i·U_i ≡ α·R, and by Lemma 4.2, R ≡ Σ_{j=1}^m β_j·V_j + Σ_{k=1}^h γ_k·𝕏_k. So it is easy to see
that h = 0 and so R ≡ Σ_{j=1}^m β_j·V_j. Hence Σ_{i=1}^n α_i·U_i ≡ α·Σ_{j=1}^m β_j·V_j. Then by Lemma 4.5, Σ_{i=1}^n α_i·∀X.U_i ≡
α·Σ_{j=1}^m β_j·∀X.V_j. In addition, by the induction hypothesis, Γ ⊢ t : R with a derivation of size s − 3 (or s − 2 if n = 1),
so by rules ∀I and ≡ (not needed if n = 1), Γ ⊢ t : Σ_{j=1}^m β_j·∀X.V_j in size s − 2 (or s − 1 in the case n = 1).

• ∀E: the derivation ends with

Γ ⊢ α·t : Σ_{i=1}^n α_i·∀X.U_i
————————————————————— ∀E
Γ ⊢ α·t : Σ_{i=1}^n α_i·U_i[A/X]

By the induction hypothesis Σ_{i=1}^n α_i·∀X.U_i ≡ α·R, and by Lemma 4.2, R ≡ Σ_{j=1}^m β_j·V_j + Σ_{k=1}^h γ_k·𝕏_k. So it is easy to
see that h = 0 and so R ≡ Σ_{j=1}^m β_j·V_j. Hence Σ_{i=1}^n α_i·∀X.U_i ≡ α·Σ_{j=1}^m β_j·V_j. Then by Lemma 4.5, for each V_j there
exists W_j such that V_j ≡ ∀X.W_j, so Σ_{i=1}^n α_i·∀X.U_i ≡ α·Σ_{j=1}^m β_j·∀X.W_j. Then by the same
lemma, Σ_{i=1}^n α_i·U_i[A/X] ≡ α·Σ_{j=1}^m β_j·W_j[A/X] ≡ Σ_{j=1}^m (α×β_j)·W_j[A/X]. In addition,
by the induction hypothesis, Γ ⊢ t : R with a derivation of size s − 3 (or s − 2 if n = 1),
so by rules ∀E and ≡ (not needed if n = 1), Γ ⊢ t : Σ_{j=1}^m β_j·W_j[A/X] in size s − 2 (or
s − 1 in the case n = 1).

• sI: the derivation ends with

Γ ⊢ t : T
—————————— sI
Γ ⊢ α·t : α·T

Trivial case.

• ≡: the derivation ends with

Γ ⊢ α·t : T   T ≡ R
———————————————— ≡
Γ ⊢ α·t : R

By the induction hypothesis T ≡ α·S and Γ ⊢ t : S. Notice that R ≡ T ≡ α·S. If T ≠ α·S,
then Γ ⊢ t : S is derived with a minimum size of at most s − 2. If T = R, then the minimum size
remains because the last rule is redundant. In the other case, the sequent can be derived with
minimum size at most s − 1. □

Lemma 4.11 (Type for zero). Let t = 0 or t = α·0. Then Γ ⊢ t : T implies T ≡ 0·R.



Proof. We proceed by induction on the typing derivation.

• Rules 0I and sI: trivial cases.

• The ∀-rules (∀I and ∀E) have both the same structure: from Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·Uᵢ derive Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·Vᵢ. In both cases, by the induction hypothesis Σᵢ₌₁ⁿ αᵢ·Uᵢ ≡ 0·R, and by Lemma 4.2, R ≡ Σⱼ₌₁ᵐ βⱼ·Wⱼ + Σₖ₌₁ʰ γₖ·𝕏ₖ. It is easy to check that h = 0, so Σᵢ₌₁ⁿ αᵢ·Uᵢ ≡ 0·Σⱼ₌₁ᵐ βⱼ·Wⱼ ≡ Σⱼ₌₁ᵐ 0·Wⱼ. Hence, using the same ∀-rule, we can derive Γ ⊢ t : Σⱼ₌₁ᵐ 0·Wⱼ, and by Lemma 4.5 we can ensure that Σᵢ₌₁ⁿ αᵢ·Vᵢ ≡ 0·Σⱼ₌₁ᵐ Wⱼ.

• Rule ≡ (from Γ ⊢ t : T and T ≡ R derive Γ ⊢ t : R): by the induction hypothesis R ≡ T ≡ 0·S. ∎

Lemma 4.12 (Sums). If Γ ⊢ t + r : S, then S ≡ T + R with Γ ⊢ t : T and Γ ⊢ r : R. Moreover, if the size of the derivation of Γ ⊢ t + r : S is s, then if S = T + R the minimum sizes of the derivations of Γ ⊢ t : T and Γ ⊢ r : R are at most s − 1, and if S ≠ T + R, the minimum sizes of these derivations are at most s − 2.

Proof. We proceed by induction on the typing derivation.

• Rules ∀I and ∀E have both the same structure: from Γ ⊢ t + r : Σᵢ₌₁ⁿ αᵢ·Uᵢ derive Γ ⊢ t + r : Σᵢ₌₁ⁿ αᵢ·Vᵢ. In any case, by the induction hypothesis Γ ⊢ t : T and Γ ⊢ r : R with T + R ≡ Σᵢ₌₁ⁿ αᵢ·Uᵢ, and with derivations of minimum size at most s − 2 if the equality holds, or s − 3 if these types are not equal.

In the second case (when the types are not equal), there exist N, M ⊆ {1, …, n} with N ∪ M = {1, …, n} such that

 T ≡ Σ_{i∈N\M} αᵢ·Uᵢ + Σ_{i∈N∩M} βᵢ·Uᵢ and
 R ≡ Σ_{i∈M\N} αᵢ·Uᵢ + Σ_{i∈N∩M} γᵢ·Uᵢ,

where ∀i ∈ N∩M, βᵢ + γᵢ = αᵢ. Therefore, using ≡ (if needed) and the same ∀-rule, we get Γ ⊢ t : Σ_{i∈N\M} αᵢ·Vᵢ + Σ_{i∈N∩M} βᵢ·Vᵢ and Γ ⊢ r : Σ_{i∈M\N} αᵢ·Vᵢ + Σ_{i∈N∩M} γᵢ·Vᵢ, with derivations of minimum size at most s − 1.

• Rule ≡ (from Γ ⊢ t + r : S′ and S′ ≡ S derive Γ ⊢ t + r : S): by the induction hypothesis, S ≡ S′ ≡ T + R and we can derive Γ ⊢ t : T and Γ ⊢ r : R with a minimum size of at most s − 2.

• Rule +I (from Γ ⊢ t : T and Γ ⊢ r : R derive Γ ⊢ t + r : T + R): this is the trivial case. ∎

n m
Lemma 4.13 (Applications). If   (t) r : T , then   t : i X
.(U T i ) and   r :
j/X
U[A
] where
i =1 j =1 j
n m


(t)r
i =1 j =1 i j T i [ A j / X ] V , T for some V .

Proof. We proceed by induction on the typing derivation.

• Rules ∀I and ∀E have both the same structure: from Γ ⊢ (t) r : Σₖ₌₁ᵒ δₖ·Vₖ derive Γ ⊢ (t) r : Σₖ₌₁ᵒ δₖ·Wₖ. In any case, by the induction hypothesis Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ), Γ ⊢ r : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] and Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) r}_{V,Γ} Σₖ₌₁ᵒ δₖ·Vₖ ⪰^{(t) r}_{V,Γ} Σₖ₌₁ᵒ δₖ·Wₖ.

• Rule ≡ (from Γ ⊢ (t) r : S and S ≡ R derive Γ ⊢ (t) r : R): by the induction hypothesis Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ), Γ ⊢ r : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] and Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) r}_{V,Γ} S ≡ R.

• Rule →E (from Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ r : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] derive Γ ⊢ (t) r : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]): this is the trivial case. ∎

Lemma 4.14 (Abstractions). If Γ ⊢ λx.t : T, then Γ, x : U ⊢ t : R where U → R ⪰^{λx.t}_{V,Γ} T for some V.

Proof. We proceed by induction on the typing derivation.

• Rules ∀I and ∀E have both the same structure: from Γ ⊢ λx.t : Σᵢ₌₁ⁿ αᵢ·Uᵢ derive Γ ⊢ λx.t : Σᵢ₌₁ⁿ αᵢ·Vᵢ. In any case, by the induction hypothesis Γ, x : U ⊢ t : R, where U → R ⪰^{λx.t}_{V,Γ} Σᵢ₌₁ⁿ αᵢ·Uᵢ ⪰^{λx.t}_{V,Γ} Σᵢ₌₁ⁿ αᵢ·Vᵢ.

• Rule ≡ (from Γ ⊢ λx.t : R and R ≡ T derive Γ ⊢ λx.t : T): by the induction hypothesis Γ, x : U ⊢ t : S where U → S ⪰^{λx.t}_{V,Γ} R ≡ T.

• Rule →I (from Γ, x : U ⊢ t : T derive Γ ⊢ λx.t : U → T): this is the trivial case. ∎

Lemma 4.15 (Basis terms). For any context Γ, type T and basis term b, if Γ ⊢ b : T then there exists a unit type U such that T ≡ U.

Proof. By induction on the typing derivation.

• Rules ∀I and ∀E have both the same structure: from Γ ⊢ b : Σᵢ₌₁ⁿ αᵢ·Uᵢ derive Γ ⊢ b : Σᵢ₌₁ⁿ αᵢ·Vᵢ. In any case, by the induction hypothesis U ≡ Σᵢ₌₁ⁿ αᵢ·Uᵢ ⪰^{b}_{V,Γ} Σᵢ₌₁ⁿ αᵢ·Vᵢ; then, by a straightforward case analysis, we can check that Σᵢ₌₁ⁿ αᵢ·Vᵢ ≡ U′ for some unit type U′.

• Rule ≡ (from Γ ⊢ b : R and R ≡ T derive Γ ⊢ b : T): by the induction hypothesis U ≡ R ≡ T.

• Rules ax (Γ, x : U ⊢ x : U) and →I (from Γ, x : U ⊢ t : T derive Γ ⊢ λx.t : U → T): these two are the trivial cases. ∎

Lemma 4.16 (Substitution lemma). For any term t, basis term b, term variable x, context Γ, types T, U, type variable X and type A, where A is a unit type if X is a unit variable and a general type otherwise, we have:

1. if Γ ⊢ t : T, then Γ[A/X] ⊢ t : T[A/X];
2. if Γ, x : U ⊢ t : T and Γ ⊢ b : U, then Γ ⊢ t[b/x] : T.

Proof.

1. Induction on the typing derivation.

• Rule ax (Γ, x : U ⊢ x : U): notice that Γ[A/X], x : U[A/X] ⊢ x : U[A/X] can also be derived with the same rule.

• Rule 0I (from Γ ⊢ t : T derive Γ ⊢ 0 : 0·T): by the induction hypothesis Γ[A/X] ⊢ t : T[A/X], so by rule 0I, Γ[A/X] ⊢ 0 : 0·T[A/X] = (0·T)[A/X].

• Rule →I (from Γ, x : U ⊢ t : T derive Γ ⊢ λx.t : U → T): by the induction hypothesis Γ[A/X], x : U[A/X] ⊢ t : T[A/X], so by rule →I, Γ[A/X] ⊢ λx.t : U[A/X] → T[A/X] = (U → T)[A/X].

• Rule →E (from Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀Y⃗.(U → Tᵢ) and Γ ⊢ r : Σⱼ₌₁ᵐ βⱼ·U[B⃗ⱼ/Y⃗] derive Γ ⊢ (t) r : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[B⃗ⱼ/Y⃗]): by the induction hypothesis Γ[A/X] ⊢ t : (Σᵢ₌₁ⁿ αᵢ·∀Y⃗.(U → Tᵢ))[A/X], and this type is equal to Σᵢ₌₁ⁿ αᵢ·∀Y⃗.(U[A/X] → Tᵢ[A/X]). Also Γ[A/X] ⊢ r : (Σⱼ₌₁ᵐ βⱼ·U[B⃗ⱼ/Y⃗])[A/X] = Σⱼ₌₁ᵐ βⱼ·U[B⃗ⱼ/Y⃗][A/X]. Since Y⃗ is bound, we can consider that it does not occur in A. Hence U[B⃗ⱼ/Y⃗][A/X] = U[A/X][B⃗ⱼ[A/X]/Y⃗], and so, by rule →E,

 Γ[A/X] ⊢ (t) r : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A/X][B⃗ⱼ[A/X]/Y⃗] = (Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[B⃗ⱼ/Y⃗])[A/X].

• Rule ∀I (from Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·Uᵢ with Y ∉ FV(Γ) derive Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ): by the induction hypothesis, Γ[A/X] ⊢ t : (Σᵢ₌₁ⁿ αᵢ·Uᵢ)[A/X] = Σᵢ₌₁ⁿ αᵢ·Uᵢ[A/X]. Then, by rule ∀I, Γ[A/X] ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ[A/X] = (Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ)[A/X] (in the case Y ∈ FV(A), we can rename the bound variable).

• Rule ∀E (from Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ derive Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·Uᵢ[B/Y]): since Y is bound, we can consider Y ∉ FV(A). By the induction hypothesis Γ[A/X] ⊢ t : (Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ)[A/X] = Σᵢ₌₁ⁿ αᵢ·∀Y.Uᵢ[A/X]. Then by rule ∀E, Γ[A/X] ⊢ t : Σᵢ₌₁ⁿ αᵢ·Uᵢ[A/X][B/Y]. We can consider X ∉ FV(B) (in the other case, just take B[A/X] in the ∀-elimination), hence Σᵢ₌₁ⁿ αᵢ·Uᵢ[A/X][B/Y] = Σᵢ₌₁ⁿ αᵢ·Uᵢ[B/Y][A/X].

• Rule sI (from Γ ⊢ t : T derive Γ ⊢ α·t : α·T): by the induction hypothesis Γ[A/X] ⊢ t : T[A/X], so by rule sI, Γ[A/X] ⊢ α·t : α·T[A/X] = (α·T)[A/X].

• Rule +I (from Γ ⊢ t : T and Γ ⊢ r : R derive Γ ⊢ t + r : T + R): by the induction hypothesis Γ[A/X] ⊢ t : T[A/X] and Γ[A/X] ⊢ r : R[A/X], so by rule +I, Γ[A/X] ⊢ t + r : T[A/X] + R[A/X] = (T + R)[A/X].

• Rule ≡ (from Γ ⊢ t : T and T ≡ R derive Γ ⊢ t : R): by the induction hypothesis Γ[A/X] ⊢ t : T[A/X], and since T ≡ R, also T[A/X] ≡ R[A/X], so by rule ≡, Γ[A/X] ⊢ t : R[A/X].

2. We proceed by induction on the typing derivation of Γ, x : U ⊢ t : T.

(a) Let Γ, x : U ⊢ t : T as a consequence of rule ax. Cases:
 • t = x: then T = U, and so Γ ⊢ t[b/x] : T and Γ ⊢ b : U are the same sequent.
 • t = y: notice that y[b/x] = y. By Lemma 4.3, Γ, x : U ⊢ y : T implies Γ ⊢ y : T.
(b) Let Γ, x : U ⊢ t : T as a consequence of rule 0I; then t = 0 and T = 0·R, with Γ, x : U ⊢ r : R for some r. By the induction hypothesis, Γ ⊢ r[b/x] : R. Hence, by rule 0I, Γ ⊢ 0 : 0·R.
(c) Let Γ, x : U ⊢ t : T as a consequence of rule →I; then t = λy.r and T = V → R, with Γ, x : U, y : V ⊢ r : R. Since our system admits weakening (Lemma 4.3), the sequent Γ, y : V ⊢ b : U is derivable. Then by the induction hypothesis, Γ, y : V ⊢ r[b/x] : R, from where, by rule →I, we obtain Γ ⊢ λy.r[b/x] : V → R. We are done, since λy.r[b/x] = (λy.r)[b/x].
(d) Let Γ, x : U ⊢ t : T as a consequence of rule →E; then t = (r) u and T = Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Rᵢ[B⃗ⱼ/Y⃗], with Γ, x : U ⊢ r : Σᵢ₌₁ⁿ αᵢ·∀Y⃗.(V → Rᵢ) and Γ, x : U ⊢ u : Σⱼ₌₁ᵐ βⱼ·V[B⃗ⱼ/Y⃗]. By the induction hypothesis, Γ ⊢ r[b/x] : Σᵢ₌₁ⁿ αᵢ·∀Y⃗.(V → Rᵢ) and Γ ⊢ u[b/x] : Σⱼ₌₁ᵐ βⱼ·V[B⃗ⱼ/Y⃗]. Then, by rule →E, Γ ⊢ (r[b/x]) u[b/x] : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Rᵢ[B⃗ⱼ/Y⃗].
(e) Let Γ, x : U ⊢ t : T as a consequence of rule ∀I. Then T = Σᵢ₌₁ⁿ αᵢ·∀Y.Vᵢ, with Γ, x : U ⊢ t : Σᵢ₌₁ⁿ αᵢ·Vᵢ and Y ∉ FV(Γ) ∪ FV(U). By the induction hypothesis, Γ ⊢ t[b/x] : Σᵢ₌₁ⁿ αᵢ·Vᵢ. Then by rule ∀I, Γ ⊢ t[b/x] : Σᵢ₌₁ⁿ αᵢ·∀Y.Vᵢ.
(f) Let Γ, x : U ⊢ t : T as a consequence of rule ∀E; then T = Σᵢ₌₁ⁿ αᵢ·Vᵢ[B/Y], with Γ, x : U ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀Y.Vᵢ. By the induction hypothesis, Γ ⊢ t[b/x] : Σᵢ₌₁ⁿ αᵢ·∀Y.Vᵢ. By rule ∀E, Γ ⊢ t[b/x] : Σᵢ₌₁ⁿ αᵢ·Vᵢ[B/Y].
(g) Let Γ, x : U ⊢ t : T as a consequence of rule sI. Then T = α·R and t = α·r, with Γ, x : U ⊢ r : R. By the induction hypothesis Γ ⊢ r[b/x] : R. Hence by rule sI, Γ ⊢ α·r[b/x] : α·R. Notice that α·r[b/x] = (α·r)[b/x].
(h) Let Γ, x : U ⊢ t : T as a consequence of rule +I. Then t = r + u and T = R + S, with Γ, x : U ⊢ r : R and Γ, x : U ⊢ u : S. By the induction hypothesis, Γ ⊢ r[b/x] : R and Γ ⊢ u[b/x] : S. Then by rule +I, Γ ⊢ r[b/x] + u[b/x] : R + S. Notice that r[b/x] + u[b/x] = (r + u)[b/x].
(i) Let Γ, x : U ⊢ t : T as a consequence of rule ≡. Then T ≡ R and Γ, x : U ⊢ t : R. By the induction hypothesis, Γ ⊢ t[b/x] : R. Hence, by rule ≡, Γ ⊢ t[b/x] : T. ∎
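The clause-by-clause substitution t[b/x] threaded through part 2 of the proof can be transcribed directly on a small term datatype. The following is only a sketch, under our own hypothetical encoding of vectorial terms (nested tuples, not the paper's formalism), illustrating how each case (a)–(i) pushes the substitution under the corresponding constructor:

```python
# Hypothetical encoding of vectorial terms as nested tuples:
#   ('var', x) | ('lam', x, t) | ('app', t, r)
#   ('zero',)  | ('scal', a, t) | ('sum', t, r)

def subst(t, x, b):
    """t[b/x]; assumes bound variables of t are distinct from the
    free variables of b (Barendregt convention), so no capture occurs."""
    tag = t[0]
    if tag == 'var':
        return b if t[1] == x else t          # case (a)
    if tag == 'lam':
        y, body = t[1], t[2]
        # a binder for x shadows it; otherwise substitute in the body (case (c))
        return t if y == x else ('lam', y, subst(body, x, b))
    if tag == 'app':                          # case (d)
        return ('app', subst(t[1], x, b), subst(t[2], x, b))
    if tag == 'zero':                         # case (b)
        return t
    if tag == 'scal':                         # case (g): (a.r)[b/x] = a.(r[b/x])
        return ('scal', t[1], subst(t[2], x, b))
    if tag == 'sum':                          # case (h): (r+u)[b/x] = r[b/x]+u[b/x]
        return ('sum', subst(t[1], x, b), subst(t[2], x, b))
    raise ValueError(tag)

# (x + 2.x)[b/x] = b + 2.b
t = ('sum', ('var', 'x'), ('scal', 2, ('var', 'x')))
print(subst(t, 'x', ('var', 'b')))
```

The substitution commutes with sums and scalars exactly as the proof cases (g) and (h) require; the ∀-rules (e), (f) act only on types and leave the term untouched.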

A.2. Proof of Theorem 4.1

Theorem 4.1 (Weak subject reduction). For any terms t, t′, any context Γ and any type T, if t →_R t′ and Γ ⊢ t : T, then:

1. if R ∉ Group F, then Γ ⊢ t′ : T;
2. if R ∈ Group F, then there exists S ⪰ T such that Γ ⊢ t′ : S and Γ ⊢ t : S.

Proof. Let t →_R t′ and Γ ⊢ t : T. We proceed by induction on the rewrite relation.

Group E.

• 0·t → 0: Consider Γ ⊢ 0·t : T. By Lemma 4.10, we have T ≡ 0·R and Γ ⊢ t : R. Then, by rule 0I, Γ ⊢ 0 : 0·R. We conclude using rule ≡.
• 1·t → t: Consider Γ ⊢ 1·t : T; then by Lemma 4.10, T ≡ 1·R and Γ ⊢ t : R. Notice that R ≡ T, so we conclude using rule ≡.
• α·0 → 0: Consider Γ ⊢ α·0 : T; then by Lemma 4.11, T ≡ 0·R. Hence by rules ≡ and 0I, Γ ⊢ 0 : 0·(0·R), and so we conclude using rule ≡.
• α·(β·t) → (α×β)·t: Consider Γ ⊢ α·(β·t) : T. By Lemma 4.10, T ≡ α·R and Γ ⊢ β·t : R. By Lemma 4.10 again, R ≡ β·S with Γ ⊢ t : S. Notice that (α×β)·S ≡ α·(β·S) ≡ T, hence by rules sI and ≡, we obtain Γ ⊢ (α×β)·t : T.
• α·(t + r) → α·t + α·r: Consider Γ ⊢ α·(t + r) : T. By Lemma 4.10, T ≡ α·R and Γ ⊢ t + r : R. By Lemma 4.12, Γ ⊢ t : R₁ and Γ ⊢ r : R₂, with R₁ + R₂ ≡ R. Then by rules sI and +I, Γ ⊢ α·t + α·r : α·R₁ + α·R₂. Notice that α·R₁ + α·R₂ ≡ α·(R₁ + R₂) ≡ α·R ≡ T. We conclude by rule ≡.

Group F.

• α·t + β·t → (α+β)·t: Consider Γ ⊢ α·t + β·t : T; then by Lemma 4.12, Γ ⊢ α·t : T₁ and Γ ⊢ β·t : T₂ with T₁ + T₂ ≡ T. Then by Lemma 4.10, T₁ ≡ α·R with Γ ⊢ t : R, and T₂ ≡ β·S. By rule sI, Γ ⊢ (α+β)·t : (α+β)·R. Notice that (α+β)·R ⪰ α·R + β·S ≡ T₁ + T₂ ≡ T.
• α·t + t → (α+1)·t and t + t → (1+1)·t: the proofs of these two cases are simplified versions of the previous case.
• t + 0 → t: Consider Γ ⊢ t + 0 : T. By Lemma 4.12, Γ ⊢ t : R and Γ ⊢ 0 : S with R + S ≡ T. In addition, by Lemma 4.11, S ≡ 0·S′. Notice that R ≡ R + 0·R ⪰ R + 0·S′ ≡ R + S ≡ T.

Group B.

• (λx.t) b → t[b/x]: Consider Γ ⊢ (λx.t) b : T; then by Lemma 4.13, we have Γ ⊢ λx.t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Rᵢ) and Γ ⊢ b : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Rᵢ[A⃗ⱼ/X⃗] ⪰^{(λx.t) b}_{V,Γ} T. However, we can simplify these types using Lemma 4.15, and so we have Γ ⊢ λx.t : ∀X⃗.(U → R) and Γ ⊢ b : U[A⃗/X⃗] with R[A⃗/X⃗] ⪰^{(λx.t) b}_{V,Γ} T. Note that X⃗ ∉ FV(Γ) (from the arrow introduction rule). Hence, by Lemma 4.14, Γ, x : V ⊢ t : S, with V → S ⪰^{λx.t}_{V,Γ} ∀X⃗.(U → R). Hence, by Lemma 4.9, U ≡ V[B⃗/Y⃗] and R ≡ S[B⃗/Y⃗] with Y⃗ ∉ FV(Γ), so by Lemma 4.16(1), Γ, x : U ⊢ t : R. Applying Lemma 4.16(1) once more, we have Γ[A⃗/X⃗], x : U[A⃗/X⃗] ⊢ t : R[A⃗/X⃗]. Since X⃗ ∉ FV(Γ), Γ[A⃗/X⃗] = Γ and we can apply Lemma 4.16(2) to get Γ ⊢ t[b/x] : R[A⃗/X⃗]. So, by Lemma 4.8, R[A⃗/X⃗] ⪰^{t[b/x]}_{V,Γ} T, which implies Γ ⊢ t[b/x] : T.

Group A.

• (t + r) u → (t) u + (r) u: Consider Γ ⊢ (t + r) u : T. Then by Lemma 4.13, Γ ⊢ t + r : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ u : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t+r) u}_{V,Γ} T. Then by Lemma 4.12, Γ ⊢ t : R₁ and Γ ⊢ r : R₂, with R₁ + R₂ ≡ Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ). Hence, there exist N₁, N₂ ⊆ {1, …, n} with N₁ ∪ N₂ = {1, …, n} such that

 R₁ ≡ Σ_{i∈N₁\N₂} αᵢ·∀X⃗.(U → Tᵢ) + Σ_{i∈N₁∩N₂} αᵢ′·∀X⃗.(U → Tᵢ) and
 R₂ ≡ Σ_{i∈N₂\N₁} αᵢ·∀X⃗.(U → Tᵢ) + Σ_{i∈N₁∩N₂} αᵢ″·∀X⃗.(U → Tᵢ),

where ∀i ∈ N₁∩N₂, αᵢ′ + αᵢ″ = αᵢ. Therefore, using ≡ we get

 Γ ⊢ t : Σ_{i∈N₁\N₂} αᵢ·∀X⃗.(U → Tᵢ) + Σ_{i∈N₁∩N₂} αᵢ′·∀X⃗.(U → Tᵢ) and
 Γ ⊢ r : Σ_{i∈N₂\N₁} αᵢ·∀X⃗.(U → Tᵢ) + Σ_{i∈N₁∩N₂} αᵢ″·∀X⃗.(U → Tᵢ).

So, using rule →E, we get

 Γ ⊢ (t) u : Σ_{i∈N₁\N₂} Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] + Σ_{i∈N₁∩N₂} Σⱼ₌₁ᵐ αᵢ′βⱼ·Tᵢ[A⃗ⱼ/X⃗] and
 Γ ⊢ (r) u : Σ_{i∈N₂\N₁} Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] + Σ_{i∈N₁∩N₂} Σⱼ₌₁ᵐ αᵢ″βⱼ·Tᵢ[A⃗ⱼ/X⃗].

Finally, by rule +I we can conclude Γ ⊢ (t) u + (r) u : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t+r) u}_{V,Γ} T. Then by Lemma 4.8, Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) u+(r) u}_{V,Γ} T, so Γ ⊢ (t) u + (r) u : T.


• (t) (r + u) → (t) r + (t) u: Consider Γ ⊢ (t) (r + u) : T. By Lemma 4.13, Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ r + u : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) (r+u)}_{V,Γ} T. Then by Lemma 4.12, Γ ⊢ r : R₁ and Γ ⊢ u : R₂, with R₁ + R₂ ≡ Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗]. Hence, there exist M₁, M₂ ⊆ {1, …, m} with M₁ ∪ M₂ = {1, …, m} such that

 R₁ ≡ Σ_{j∈M₁\M₂} βⱼ·U[A⃗ⱼ/X⃗] + Σ_{j∈M₁∩M₂} βⱼ′·U[A⃗ⱼ/X⃗] and
 R₂ ≡ Σ_{j∈M₂\M₁} βⱼ·U[A⃗ⱼ/X⃗] + Σ_{j∈M₁∩M₂} βⱼ″·U[A⃗ⱼ/X⃗],

where ∀j ∈ M₁∩M₂, βⱼ′ + βⱼ″ = βⱼ. Therefore, using ≡ we get

 Γ ⊢ r : Σ_{j∈M₁\M₂} βⱼ·U[A⃗ⱼ/X⃗] + Σ_{j∈M₁∩M₂} βⱼ′·U[A⃗ⱼ/X⃗] and
 Γ ⊢ u : Σ_{j∈M₂\M₁} βⱼ·U[A⃗ⱼ/X⃗] + Σ_{j∈M₁∩M₂} βⱼ″·U[A⃗ⱼ/X⃗].

So, using rule →E, we get

 Γ ⊢ (t) r : Σᵢ₌₁ⁿ Σ_{j∈M₁\M₂} αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] + Σᵢ₌₁ⁿ Σ_{j∈M₁∩M₂} αᵢβⱼ′·Tᵢ[A⃗ⱼ/X⃗] and
 Γ ⊢ (t) u : Σᵢ₌₁ⁿ Σ_{j∈M₂\M₁} αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] + Σᵢ₌₁ⁿ Σ_{j∈M₁∩M₂} αᵢβⱼ″·Tᵢ[A⃗ⱼ/X⃗].

Finally, by rule +I we can conclude Γ ⊢ (t) r + (t) u : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]. We finish the case with Lemma 4.8.
 
• (α·t) r → α·(t) r: Consider Γ ⊢ (α·t) r : T. Then by Lemma 4.13, Γ ⊢ α·t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ r : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(α·t) r}_{V,Γ} T. Then by Lemma 4.10, Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) ≡ α·R and Γ ⊢ t : R. By Lemma 4.2, R ≡ Σᵢ₌₁ⁿ δᵢ·Vᵢ + Σₖ₌₁ʰ γₖ·𝕏ₖ; however, it is easy to see that h = 0, because R is equivalent to a sum of terms, none of which is an 𝕏. So R ≡ Σᵢ₌₁ⁿ δᵢ·Vᵢ. Without loss of generality (cf. the previous case), take Tᵢ ≠ Tₖ for all i ≠ k and h = 0, and notice that Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) ≡ Σᵢ₌₁ⁿ (α×δᵢ)·Vᵢ. Then by Lemma 4.4, there exists a permutation p such that αᵢ = α×δ_{p(i)} and ∀X⃗.(U → Tᵢ) ≡ V_{p(i)}. Without loss of generality let p be the trivial permutation, and so Γ ⊢ t : Σᵢ₌₁ⁿ δᵢ·∀X⃗.(U → Tᵢ). Hence, using rule →E, Γ ⊢ (t) r : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ δᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]. Therefore, by rule sI, Γ ⊢ α·(t) r : α·Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ δᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]. Notice that α·Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ δᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ≡ Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]. We finish the case with Lemma 4.8.
• (t) (α·r) → α·(t) r: Consider Γ ⊢ (t) (α·r) : T. Then by Lemma 4.13, Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ α·r : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) (α·r)}_{V,Γ} T. Then by Lemma 4.10, Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] ≡ α·R and Γ ⊢ r : R. By Lemma 4.2, R ≡ Σⱼ₌₁ᵐ δⱼ·Vⱼ + Σₖ₌₁ʰ γₖ·𝕏ₖ; however, it is easy to see that h = 0, because R is equivalent to a sum of terms, none of which is an 𝕏. So R ≡ Σⱼ₌₁ᵐ δⱼ·Vⱼ. Without loss of generality (cf. the previous case), take A⃗ⱼ ≠ A⃗ₖ for all j ≠ k, and notice that Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] ≡ Σⱼ₌₁ᵐ (α×δⱼ)·Vⱼ. Then by Lemma 4.4, there exists a permutation p such that βⱼ = α×δ_{p(j)} and U[A⃗ⱼ/X⃗] ≡ V_{p(j)}. Without loss of generality let p be the trivial permutation, and so Γ ⊢ r : Σⱼ₌₁ᵐ δⱼ·U[A⃗ⱼ/X⃗]. Hence, using rule →E, Γ ⊢ (t) r : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢδⱼ·Tᵢ[A⃗ⱼ/X⃗]. Therefore, by rule sI, Γ ⊢ α·(t) r : α·Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢδⱼ·Tᵢ[A⃗ⱼ/X⃗]. Notice that α·Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢδⱼ·Tᵢ[A⃗ⱼ/X⃗] ≡ Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗]. We finish the case with Lemma 4.8.
• (0) t → 0: Consider Γ ⊢ (0) t : T. By Lemma 4.13, Γ ⊢ 0 : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ t : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(0) t}_{V,Γ} T. Then by Lemma 4.11, Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) ≡ 0·R. By Lemma 4.2, R ≡ Σᵢ₌₁ⁿ δᵢ·Vᵢ + Σₖ₌₁ʰ γₖ·𝕏ₖ; however, it is easy to see that h = 0, and so R ≡ Σᵢ₌₁ⁿ δᵢ·Vᵢ. Without loss of generality, take Tᵢ ≠ Tₖ for all i ≠ k, and notice that Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) ≡ Σᵢ₌₁ⁿ 0·Vᵢ. By Lemma 4.4, αᵢ = 0. Notice that by rule →E, Γ ⊢ (0) t : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗], hence by rules 0I and ≡, Γ ⊢ 0 : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(0) t}_{V,Γ} T. By Lemma 4.8, Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗] ⪰^{0}_{V,Γ} T, so Γ ⊢ 0 : T.
 
• (t) 0 → 0: Consider Γ ⊢ (t) 0 : T. By Lemma 4.13, Γ ⊢ t : Σᵢ₌₁ⁿ αᵢ·∀X⃗.(U → Tᵢ) and Γ ⊢ 0 : Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗], where Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ αᵢβⱼ·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) 0}_{V,Γ} T. Then by Lemma 4.11, Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] ≡ 0·R. By Lemma 4.2, R ≡ Σⱼ₌₁ᵐ δⱼ·Vⱼ + Σₖ₌₁ʰ γₖ·𝕏ₖ; however, it is easy to see that h = 0, and so R ≡ Σⱼ₌₁ᵐ δⱼ·Vⱼ. Without loss of generality, take A⃗ⱼ ≠ A⃗ₖ for all j ≠ k, and notice that Σⱼ₌₁ᵐ βⱼ·U[A⃗ⱼ/X⃗] ≡ Σⱼ₌₁ᵐ 0·Vⱼ. By Lemma 4.4, βⱼ = 0. Notice that by rule →E, Γ ⊢ (t) 0 : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗], hence by rules 0I and ≡, Γ ⊢ 0 : Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗] ⪰^{(t) 0}_{V,Γ} T. By Lemma 4.8, Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ 0·Tᵢ[A⃗ⱼ/X⃗] ⪰^{0}_{V,Γ} T. Hence, Γ ⊢ 0 : T.
i =1

Contextual rules. These follow from the generation lemmas, the induction hypothesis and the fact that ⪰ is congruent. ∎

Appendix B. Detailed proofs of lemmas and theorems in Section 5

B.1. First lemmas



Lemma 5.3. If A, B and all the Aᵢ's are in RC, then so are A → B, Σᵢ Aᵢ and ⋂ᵢ Aᵢ.

Proof. Before proving that these operators define reducibility candidates, we need the following result, which simplifies the proof: a linear combination of strongly normalising terms is strongly normalising. That is:

Auxiliary Lemma (AL). If {tᵢ}ᵢ are strongly normalising, then so is F(t⃗) for any algebraic context F.

Proof. Let t⃗ = t₁, …, tₙ. We define two notions:

• a measure s on t⃗, defined as the sum over i of the sum of the lengths of all the possible rewrite sequences starting with tᵢ;
• an algebraic measure a over algebraic contexts F(·), defined inductively by a(tᵢ) = 1, a(F(t⃗) + G(t⃗′)) = 2 + a(F(t⃗)) + a(G(t⃗′)), a(α·F(t⃗)) = 1 + 2·a(F(t⃗)), a(0) = 0.

We claim that for all linear algebraic contexts F(·) (in the sense of Remark 5.2) and all strongly normalising terms tᵢ that are not linear combinations (that is, of the form x, λx.r or (s) r), the term F(t⃗) is also strongly normalising.

The claim is proven by induction on s(t⃗) (this measure is finite because t⃗ is SN and the rewrite system is finitely branching).

• If s(t⃗) = 0, then none of the tᵢ reduces. We show by induction on a(F(t⃗)) that F(t⃗) is SN.
 – If a(F(t⃗)) = 0, then F(t⃗) = 0, which is SN.
 – Suppose it is true for all F(t⃗) of algebraic measure at most m, and consider F(t⃗) such that a(F(t⃗)) = m + 1. Since the tᵢ are not linear combinations and they are in normal form (because s(t⃗) = 0), F(t⃗) can only reduce with a rule from Group E or a rule from Group F. We show that those reductions are strictly decreasing on the algebraic measure, by a rule-by-rule analysis, and so we can conclude by the induction hypothesis.
  · 0·F(t⃗) → 0. Note that a(0·F(t⃗)) = 1 + 2·a(F(t⃗)) > 0 = a(0).
  · 1·F(t⃗) → F(t⃗). Note that a(1·F(t⃗)) = 1 + 2·a(F(t⃗)) > a(F(t⃗)).
  · α·0 → 0. Note that a(α·0) = 1 > 0 = a(0).
  · α·(β·F(t⃗)) → (α×β)·F(t⃗). Note that a(α·(β·F(t⃗))) = 1 + 2·(1 + 2·a(F(t⃗))) > 1 + 2·a(F(t⃗)) = a((α×β)·F(t⃗)).
  · α·(F(t⃗) + G(t⃗′)) → α·F(t⃗) + α·G(t⃗′). Note that a(α·(F(t⃗) + G(t⃗′))) = 5 + 2·a(F(t⃗)) + 2·a(G(t⃗′)) > 4 + 2·a(F(t⃗)) + 2·a(G(t⃗′)) = a(α·F(t⃗) + α·G(t⃗′)).
  · α·F(t⃗) + β·F(t⃗) → (α+β)·F(t⃗). Note that a(α·F(t⃗) + β·F(t⃗)) = 4 + 4·a(F(t⃗)) > 1 + 2·a(F(t⃗)) = a((α+β)·F(t⃗)).
  · α·F(t⃗) + F(t⃗) → (α+1)·F(t⃗). Note that a(α·F(t⃗) + F(t⃗)) = 3 + 3·a(F(t⃗)) > 1 + 2·a(F(t⃗)) = a((α+1)·F(t⃗)).
  · F(t⃗) + F(t⃗) → (1+1)·F(t⃗). Note that a(F(t⃗) + F(t⃗)) = 2 + 2·a(F(t⃗)) > 1 + 2·a(F(t⃗)) = a((1+1)·F(t⃗)).
  · F(t⃗) + 0 → F(t⃗). Note that a(F(t⃗) + 0) = 2 + a(F(t⃗)) > a(F(t⃗)).
  · Contextual rules are trivial.
• Suppose the claim is true for n; then consider t⃗ such that s(t⃗) = n + 1. Again, we show that F(t⃗) is SN by induction on a(F(t⃗)).
 – If a(F(t⃗)) = 0, then F(t⃗) = 0, which is SN.
 – Suppose it is true for all F(t⃗) of algebraic measure at most m, and consider F(t⃗) such that a(F(t⃗)) = m + 1. Since the tᵢ are not linear combinations, F(t⃗) can reduce in two ways:
  · F(t₁, …, tᵢ, …, tₖ) → F(t₁, …, tᵢ′, …, tₖ) with tᵢ → tᵢ′. Then tᵢ′ can be written as H(r₁, …, r_l) for some algebraic context H, where the rⱼ's are not linear combinations. Note that

   Σⱼ₌₁ˡ s(rⱼ) ≤ s(tᵢ′) < s(tᵢ).

  Define the context

   G(t₁, …, tᵢ₋₁, u₁, …, u_l, tᵢ₊₁, …, tₖ) = F(t₁, …, tᵢ₋₁, H(u₁, …, u_l), tᵢ₊₁, …, tₖ).

  The term F(t⃗) then reduces to the term G(t₁, …, tᵢ₋₁, r₁, …, r_l, tᵢ₊₁, …, tₖ), where

   s(t₁, …, tᵢ₋₁, r₁, …, r_l, tᵢ₊₁, …, tₖ) < s(t⃗).

  Using the top induction hypothesis, we conclude that F(t₁, …, tᵢ′, …, tₖ) is SN.
  · F(t⃗) → G(t⃗), with a(G(t⃗)) < a(F(t⃗)). Using the second induction hypothesis, we conclude that G(t⃗) is SN.
 All the possible reducts of F(t⃗) are SN: so is F(t⃗).

This closes the proof of the claim. Now, consider any SN terms {tᵢ}ᵢ and any algebraic context G(t⃗). Each tᵢ can be written as an algebraic sum of x's, λx.s's and (r) s's. The context G(t⃗) can then be written as F(t⃗′), where none of the tᵢ′ is a linear combination. From Remark 5.2, there exists a linear algebraic context F′(t⃗″), where t⃗″ is t⃗′ with possibly some repetitions, and where F′(t⃗″) = F(t⃗′) when considered as terms. The hypotheses of the claim are satisfied: F′(t⃗″) is therefore SN. Since it is ultimately equal to G(t⃗), G(t⃗) is also SN: the Auxiliary Lemma (AL) is valid.
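The strict decreases checked rule by rule above are easy to machine-check. The following sketch uses our own (hypothetical) encoding of algebraic contexts, not the paper's formalism; it recomputes the measure a and verifies a few of the Group E/F inequalities on an arbitrary context:

```python
# Hypothetical encoding of algebraic contexts:
#   ('hole', i) | ('sum', F, G) | ('scal', alpha, F) | ('zero',)

def a(F):
    """The algebraic measure of the Auxiliary Lemma."""
    tag = F[0]
    if tag == 'hole':
        return 1                       # a(t_i) = 1
    if tag == 'sum':
        return 2 + a(F[1]) + a(F[2])   # a(F + G) = 2 + a(F) + a(G)
    if tag == 'scal':
        return 1 + 2 * a(F[2])         # a(alpha.F) = 1 + 2 a(F)
    if tag == 'zero':
        return 0                       # a(0) = 0
    raise ValueError(tag)

F = ('sum', ('hole', 1), ('hole', 2))  # an arbitrary context F(t1, t2)

# Each checked Group E/F rule strictly decreases the measure, e.g.:
assert a(('scal', 1, F)) > a(F)                                        # 1.F -> F
assert a(('scal', 2, ('scal', 3, F))) > a(('scal', 6, F))              # a.(b.F) -> (ab).F
assert a(('sum', ('scal', 2, F), ('scal', 3, F))) > a(('scal', 5, F))  # a.F + b.F -> (a+b).F
assert a(('sum', F, ('zero',))) > a(F)                                 # F + 0 -> F
print('all decreases verified')
```

Note that the scalar values themselves never enter the measure; only the shape of the context does, which is what makes the rule-by-rule analysis uniform.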
Now, we can prove Lemma 5.3. First, we consider the case A → B.

RC₁: We must show that all t ∈ A → B are in SN₀. We proceed by induction on the definition of A → B.
 • Assume that t is such that, for r = 0 and for r = b with b ∈ A, we have (t) r ∈ B. Hence by RC₁ in B, t ∈ SN₀.
 • Assume that t is closed neutral and that Red(t) ⊆ A → B. By the induction hypothesis, all the elements of Red(t) are strongly normalising: so is t.
 • The last case is immediate: if t is the term 0, it is strongly normalising.
RC₂: We must show that if t → t′ and t ∈ A → B, then t′ ∈ A → B. We again proceed by induction on the definition of A → B.
 • Let t be such that (t) 0 ∈ B and such that for all b ∈ A, (t) b ∈ B. Then by RC₂ in B, (t′) 0 ∈ B and (t′) b ∈ B, and so t′ ∈ A → B.
 • If t is closed neutral and Red(t) ⊆ A → B, then t′ ∈ A → B, since t′ ∈ Red(t).
 • If t = 0, it does not reduce.
RC₃ and RC₄: Trivially true by definition.

Then we analyse the case Σᵢ Aᵢ.

RC₁: We show that if t ∈ Σᵢ Aᵢ then t is strongly normalising, by structural induction on the justification of t ∈ Σᵢ Aᵢ.
 • Base case: t belongs to one of the Aᵢ; it is then SN by RC₁.
 • CC₁: t is part of a list t⃗ where F(t⃗) ∈ Σᵢ Aᵢ for some algebraic context F. The induction hypothesis says that F(t⃗) is SN. This implies that t is also SN.
 • CC₂: t = F(t⃗), where F is an algebraic context and tᵢ ∈ Aᵢ. The result is obtained using the Auxiliary Lemma (AL) and RC₁ on the Aᵢ's.
 • RC₂: t is such that s → t where s is SN. This implies that t is also SN.
 • RC₃: t is closed neutral and Red(t) ⊆ Σᵢ Aᵢ; then t is strongly normalising, since all elements of Red(t) are strongly normalising.
RC₂ and RC₃: Trivially true by definition.
RC₄: Since 0 is an algebraic context, it is also in the set by CC₂.

Finally, we prove the case ⋂ᵢ Aᵢ.

RC₁: Trivial, since for all i, Aᵢ ⊆ SN₀.
RC₂: Let t ∈ ⋂ᵢ Aᵢ; then ∀i, t ∈ Aᵢ, and so by RC₂ in Aᵢ, Red(t) ⊆ Aᵢ. Thus Red(t) ⊆ ⋂ᵢ Aᵢ.
RC₃: Let t ∈ N and Red(t) ⊆ ⋂ᵢ Aᵢ. Then ∀i, Red(t) ⊆ Aᵢ, and thus, by RC₃ in Aᵢ, t ∈ Aᵢ, which implies t ∈ ⋂ᵢ Aᵢ.
RC₄: By RC₄, for all i, 0 ∈ Aᵢ. Therefore, 0 ∈ ⋂ᵢ Aᵢ.

This concludes the proof of Lemma 5.3. ∎


Lemma 5.4. Any type T has a unique (up to ≡) canonical decomposition T ≡ Σᵢ₌₁ⁿ αᵢ·Uᵢ such that for all l, k, U_l ≢ U_k.

Proof. By Lemma 4.2, T ≡ Σᵢ₌₁ⁿ αᵢ·Uᵢ + Σⱼ₌₁ᵐ βⱼ·𝕏ⱼ. Suppose that there exist l, k such that U_l ≡ U_k. Then notice that T ≡ (α_l + α_k)·U_l + Σ_{i≠l,k} αᵢ·Uᵢ. Repeat the process until there are no more l, k such that U_l ≡ U_k. Proceed analogously to obtain a linear combination of distinct 𝕏ⱼ. ∎
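The merging step in the proof of Lemma 5.4 — summing the scalars of equivalent unit types until all the Uᵢ are pairwise distinct — can be sketched as follows. This is our own illustration, with syntactic equality standing in for the equivalence ≡ on unit types (an assumption of the sketch, since deciding ≡ requires the full equivalence rules):

```python
# Represent a sum  sum_i alpha_i . U_i  as a list of (alpha_i, U_i) pairs
# and merge the scalars of equal unit types, as in Lemma 5.4.

def canonical(decomposition):
    merged = {}  # insertion-ordered in Python 3.7+
    for alpha, u in decomposition:
        # a.U + b.U  ==  (a+b).U
        merged[u] = merged.get(u, 0) + alpha
    return list(merged.items())

# 2.U + 3.V + 4.U  ==  6.U + 3.V
print(canonical([(2, 'U'), (3, 'V'), (4, 'U')]))  # [('U', 6), ('V', 3)]
```

Termination is immediate here, mirroring the proof: each merge strictly decreases the number of summands, so the process reaches a decomposition with pairwise distinct unit types.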

Lemma 5.5. For any types T and A, variable X and valuation ρ, we have ⟦T[A/X]⟧_ρ = ⟦T⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} and ⟦T[A/X]⟧_ρ̄ = ⟦T⟧_{ρ̄,(X⁻,X⁺)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}.

Proof. We proceed by structural induction on T. In each case we only show the case of ⟦·⟧_ρ, since the case of ⟦·⟧_ρ̄ follows analogously.

• T = X. Then ⟦X[A/X]⟧_ρ = ⟦A⟧_ρ = ⟦X⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}.
• T = Y. Then ⟦Y[A/X]⟧_ρ = ⟦Y⟧_ρ = ρ⁺(Y) = ⟦Y⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}.
• T = U → R. Then ⟦(U → R)[A/X]⟧_ρ = ⟦U[A/X]⟧_ρ̄ → ⟦R[A/X]⟧_ρ. By the induction hypothesis, we have ⟦U[A/X]⟧_ρ̄ → ⟦R[A/X]⟧_ρ = ⟦U⟧_{ρ̄,(X⁻,X⁺)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} → ⟦R⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} = ⟦U → R⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}.
• T = ∀Y.V. Then ⟦(∀Y.V)[A/X]⟧_ρ = ⟦∀Y.V[A/X]⟧_ρ, which by definition is equal to ⋂_{B∈RC} ⟦V[A/X]⟧_{ρ,(Y⁺,Y⁻)↦(B,B)}, and this, by the induction hypothesis, is equal to

 ⋂_{B∈RC} ⟦V⟧_{ρ,(Y⁺,Y⁻)↦(B,B),(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} = ⟦∀Y.V⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}.

• T of canonical decomposition Σᵢ αᵢ·Uᵢ. Then ⟦T[A/X]⟧_ρ = Σᵢ ⟦Uᵢ[A/X]⟧_ρ, which by the induction hypothesis is Σᵢ ⟦Uᵢ⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)} = ⟦T⟧_{ρ,(X⁺,X⁻)↦(⟦A⟧_ρ,⟦A⟧_ρ̄)}. ∎

B.2. Proof of the Adequacy Lemma 5.6

We need the following results first.

Lemma B.1. For any type T, if ρ = (ρ⁺, ρ⁻) and ρ′ = (ρ′⁺, ρ′⁻) are two valid valuations over FV(T) such that ∀X, ρ′⁻(X) ⊆ ρ⁻(X) and ρ⁺(X) ⊆ ρ′⁺(X), then we have ⟦T⟧_ρ ⊆ ⟦T⟧_ρ′ and ⟦T⟧_ρ̄′ ⊆ ⟦T⟧_ρ̄.

Proof. Structural induction on T.

• T = X. Then ⟦X⟧_ρ = ρ⁺(X) ⊆ ρ′⁺(X) = ⟦X⟧_ρ′, and ⟦X⟧_ρ̄′ = ρ′⁻(X) ⊆ ρ⁻(X) = ⟦X⟧_ρ̄.
• T = U → R. Then ⟦U → R⟧_ρ = ⟦U⟧_ρ̄ → ⟦R⟧_ρ and ⟦U → R⟧_ρ′ = ⟦U⟧_ρ̄′ → ⟦R⟧_ρ′. By the induction hypothesis ⟦U⟧_ρ̄′ ⊆ ⟦U⟧_ρ̄, ⟦U⟧_ρ ⊆ ⟦U⟧_ρ′, ⟦R⟧_ρ ⊆ ⟦R⟧_ρ′ and ⟦R⟧_ρ̄′ ⊆ ⟦R⟧_ρ̄. We proceed by induction on the definition of → to show that if t ∈ ⟦U⟧_ρ̄ → ⟦R⟧_ρ, then t ∈ ⟦U⟧_ρ̄′ → ⟦R⟧_ρ′ = ⟦U → R⟧_ρ′.
 – Let t ∈ {t | (t) 0 ∈ ⟦R⟧_ρ and ∀b ∈ ⟦U⟧_ρ̄, (t) b ∈ ⟦R⟧_ρ}. Notice that (t) 0 ∈ ⟦R⟧_ρ ⊆ ⟦R⟧_ρ′. Also, ∀b ∈ ⟦U⟧_ρ̄′, b ∈ ⟦U⟧_ρ̄ and then (t) b ∈ ⟦R⟧_ρ ⊆ ⟦R⟧_ρ′.
 – Let Red(t) ⊆ ⟦U → R⟧_ρ and t ∈ N. By the induction hypothesis Red(t) ⊆ ⟦U → R⟧_ρ′ and so, by RC₃, t ∈ ⟦U → R⟧_ρ′.
 – Let t = 0. By RC₄, 0 is in any reducibility candidate, in particular it is in ⟦U → R⟧_ρ′.
 Analogously, ∀t ∈ ⟦U → R⟧_ρ̄′, t ∈ ⟦U → R⟧_ρ̄.
• T = ∀X.U. Then ⟦∀X.U⟧_ρ = ⋂_{A∈RC} ⟦U⟧_{ρ,(X⁺,X⁻)↦(A,A)}. By the induction hypothesis we have ⟦U⟧_{ρ,(X⁺,X⁻)↦(A,A)} ⊆ ⟦U⟧_{ρ′,(X⁺,X⁻)↦(A,A)}. Hence we have ⋂_{A∈RC} ⟦U⟧_{ρ,(X⁺,X⁻)↦(A,A)} ⊆ ⋂_{A∈RC} ⟦U⟧_{ρ′,(X⁺,X⁻)↦(A,A)} = ⟦∀X.U⟧_ρ′. The proof for the case ⟦∀X.U⟧_ρ̄′ ⊆ ⟦∀X.U⟧_ρ̄ is analogous.
• T ≡ Σᵢ αᵢ·Uᵢ with T ≢ U. Then ⟦T⟧_ρ = Σᵢ ⟦Uᵢ⟧_ρ. By the induction hypothesis ⟦Uᵢ⟧_ρ ⊆ ⟦Uᵢ⟧_ρ′. We proceed by induction on the justification of t ∈ Σᵢ ⟦Uᵢ⟧_ρ to show that if t ∈ Σᵢ ⟦Uᵢ⟧_ρ then t ∈ Σᵢ ⟦Uᵢ⟧_ρ′.
 – Base case: t belongs to one of the ⟦Uᵢ⟧_ρ. We conclude using the fact that ⟦Uᵢ⟧_ρ ⊆ ⟦Uᵢ⟧_ρ′: t then belongs to ⟦Uᵢ⟧_ρ′ ⊆ Σᵢ ⟦Uᵢ⟧_ρ′.
 – CC₁: t belongs to Σᵢ ⟦Uᵢ⟧_ρ because it is in a list t⃗ where F(t⃗) ∈ Σᵢ ⟦Uᵢ⟧_ρ for some algebraic context F. The induction hypothesis says that F(t⃗) ∈ Σᵢ ⟦Uᵢ⟧_ρ′. We then get t ∈ Σᵢ ⟦Uᵢ⟧_ρ′ using CC₁.
 – CC₂: let t = F(r⃗), where F is an algebraic context and rᵢ ∈ ⟦Uᵢ⟧_ρ. Note that by the induction hypothesis rᵢ ∈ ⟦Uᵢ⟧_ρ′, and so F(r⃗) ∈ Σᵢ ⟦Uᵢ⟧_ρ′.
 – RC₂: s → t with s ∈ Σᵢ₌₁ⁿ ⟦Uᵢ⟧_ρ′. Invoking RC₂, t ∈ Σᵢ₌₁ⁿ ⟦Uᵢ⟧_ρ′.
 – RC₃: Red(t) ⊆ ⟦T⟧_ρ and t ∈ N. By the induction hypothesis Red(t) ⊆ ⟦T⟧_ρ′ and so, by RC₃, t ∈ ⟦T⟧_ρ′.
 The case ⟦T⟧_ρ̄′ ⊆ ⟦T⟧_ρ̄ is analogous. ∎

Lemma B.2. For any type T, if ρ = (ρ⁺, ρ⁻) is a valid valuation over FV(T), then we have ⟦T⟧_ρ̄ ⊆ ⟦T⟧_ρ.

Proof. Structural induction on T.

• T = X. Then ⟦T⟧_ρ̄ = ρ⁻(X) ⊆ ρ⁺(X) = ⟦T⟧_ρ.
• T = U → R. Then ⟦U → R⟧_ρ̄ = ⟦U⟧_ρ → ⟦R⟧_ρ̄. By the induction hypothesis ⟦U⟧_ρ̄ ⊆ ⟦U⟧_ρ and ⟦R⟧_ρ̄ ⊆ ⟦R⟧_ρ. We must show that ∀t ∈ ⟦U → R⟧_ρ̄, t ∈ ⟦U → R⟧_ρ. Let t ∈ ⟦U → R⟧_ρ̄ = ⟦U⟧_ρ → ⟦R⟧_ρ̄. We proceed by induction on the definition of →.
 – Let t ∈ {t | (t) 0 ∈ ⟦R⟧_ρ̄ and ∀b ∈ ⟦U⟧_ρ, (t) b ∈ ⟦R⟧_ρ̄}. Notice that (t) 0 ∈ ⟦R⟧_ρ̄ ⊆ ⟦R⟧_ρ, and for all b ∈ ⟦U⟧_ρ̄, b ∈ ⟦U⟧_ρ, and so (t) b ∈ ⟦R⟧_ρ̄ ⊆ ⟦R⟧_ρ. Thus t ∈ ⟦U⟧_ρ̄ → ⟦R⟧_ρ = ⟦U → R⟧_ρ.
 – Let Red(t) ⊆ ⟦U → R⟧_ρ̄ and t ∈ N. By the induction hypothesis Red(t) ⊆ ⟦U → R⟧_ρ and so, by RC₃, t ∈ ⟦U → R⟧_ρ.
 – Let t = 0. By RC₄, 0 is in any reducibility candidate, in particular it is in ⟦U → R⟧_ρ.
• T = ∀X.U. Then ⟦∀X.U⟧_ρ̄ = ⋂_{A∈RC} ⟦U⟧_{ρ̄,(X⁺,X⁻)↦(A,A)}. By the induction hypothesis

 ⟦U⟧_{ρ̄,(X⁺,X⁻)↦(A,A)} ⊆ ⟦U⟧_{ρ,(X⁺,X⁻)↦(A,A)}.

So ⋂_{A∈RC} ⟦U⟧_{ρ̄,(X⁺,X⁻)↦(A,A)} ⊆ ⋂_{A∈RC} ⟦U⟧_{ρ,(X⁺,X⁻)↦(A,A)}, which is ⟦∀X.U⟧_ρ by definition.
• T ≡ Σᵢ αᵢ·Uᵢ with T ≢ U. Then ⟦T⟧_ρ̄ = Σᵢ ⟦Uᵢ⟧_ρ̄. By the induction hypothesis ⟦Uᵢ⟧_ρ̄ ⊆ ⟦Uᵢ⟧_ρ. We proceed by induction on the justification of t ∈ Σᵢ ⟦Uᵢ⟧_ρ̄ to show that t ∈ Σᵢ ⟦Uᵢ⟧_ρ̄ implies t ∈ Σᵢ ⟦Uᵢ⟧_ρ.
 – Base case: t ∈ ⟦Uᵢ⟧_ρ̄ for some i; by the induction hypothesis t ∈ ⟦Uᵢ⟧_ρ, which is included in Σᵢ ⟦Uᵢ⟧_ρ.
 – CC₁: t ∈ Σᵢ ⟦Uᵢ⟧_ρ̄ because it is in a list t⃗ where F(t⃗) ∈ Σᵢ ⟦Uᵢ⟧_ρ̄ for some algebraic context F. The induction hypothesis says that F(t⃗) ∈ Σᵢ ⟦Uᵢ⟧_ρ. We then get t ∈ Σᵢ ⟦Uᵢ⟧_ρ using CC₁.
 – CC₂: let t = F(r⃗), where F is an algebraic context and rᵢ ∈ ⟦Uᵢ⟧_ρ̄. By the induction hypothesis rᵢ ∈ ⟦Uᵢ⟧_ρ, and so the result holds by CC₂.
 – RC₂: let t ∈ Σᵢ ⟦Uᵢ⟧_ρ̄ and t → t′. By the induction hypothesis t ∈ Σᵢ ⟦Uᵢ⟧_ρ; hence by RC₂, t′ ∈ Σᵢ ⟦Uᵢ⟧_ρ.
 – RC₃: let Red(t) ⊆ Σᵢ ⟦Uᵢ⟧_ρ̄ and t ∈ N. By the induction hypothesis Red(t) ⊆ Σᵢ ⟦Uᵢ⟧_ρ and so, by RC₃, t ∈ Σᵢ ⟦Uᵢ⟧_ρ. ∎
Lemma B.3. Let {Aᵢ}ᵢ₌₁ⁿ be a family of reducibility candidates. If s and t both belong to Σᵢ₌₁ⁿ Aᵢ, then so does s + t. Similarly, if t ∈ Σᵢ₌₁ⁿ Aᵢ, then for any α, α·t ∈ Σᵢ₌₁ⁿ Aᵢ.

Proof. Direct corollary of the closure under CC₂. ∎

Lemma B.4. Suppose that λx.s ∈ A → B and b ∈ A. Then (λx.s) b ∈ B.

Proof. Induction on the definition of A → B.

• If λx.s is in {t | (t) 0 ∈ B and ∀b ∈ A, (t) b ∈ B}, then the result is trivial.
• λx.s cannot be in A → B by the closure under RC₃, because it is not neutral, nor by the closure under RC₄, because it is not the term 0. ∎


Remark B.5. For the proof of adequacy, we show in the following lemma that ⟦∀X.U⟧ can be equivalently defined as a more general intersection, provided that ρ is valid.

Lemma B.6. Suppose that ρ = (ρ_+, ρ_−) is a valid valuation. Then

    ⋂_{B⊆A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)} = ⋂_{A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)}.

Proof. Suppose that t ∈ ⋂_{B⊆A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)}, and pick any A ∈ RC. Let B := A: we have B ⊆ A, so t ∈ ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)}, and then t ∈ ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)}.

Since this is the case for all A ∈ RC, we conclude that

    ⋂_{B⊆A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)} ⊆ ⋂_{A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)}.

Now, suppose that t ∈ ⋂_{A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)}. Pick any pair B ⊆ A ∈ RC. Then t ∈ ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)}. From Lemma B.1 and because B ⊆ A,

    ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)} ⊆ ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)}.

Since this is true for any pair B ⊆ A ∈ RC, we deduce that

    ⋂_{A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,A)} ⊆ ⋂_{B⊆A∈RC} ⟦U⟧_{ρ,(X_+,X_−)↦(A,B)}.

We therefore have the required equality. □
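To see what the two intersections of Lemma B.6 compute in a degenerate case (an illustration added here, not in the original text), take U to be just the variable X, so that its interpretation only reads one component of the valuation; which component is read is an assumption of this sketch, but the collapse is the same either way:

```latex
% Illustrative case U = X, assuming a positive occurrence is interpreted
% by the first component: [[X]]_{(X_+,X_-) -> (A,B)} = A.
%
%   \bigcap_{B \subseteq A \in RC} [[X]]_{(A,B)}
%     = \bigcap_{B \subseteq A \in RC} A
%     = \bigcap_{A \in RC} A
%     = \bigcap_{A \in RC} [[X]]_{(A,A)}.
%
% The inner set does not depend on B, so both sides collapse to the
% intersection of all reducibility candidates; the same holds if the
% negative component B is read instead, since B ranges over all of RC.
```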


Remark B.7. Now, we can prove the Adequacy Lemma. In the proof, we replace the definition of ⟦∀X.U⟧ with the more precise intersection proposed in Lemma B.6.

Lemma 5.6 (Adequacy Lemma). Every derivable typing judgement is valid: For every valid sequent Γ ⊢ t : T, we have Γ ⊨ t : T.

Proof. The proof of the adequacy lemma is made by induction on the size of the typing derivation of Γ ⊢ t : T. We look at the last typing rule that is used, and show in each case that Γ ⊨ t : T, i.e. if T ≡ U, then t θ ∈ ⟦U⟧_ρ, or if T ≡ ∑_{i=1}^n α_i.U_i in the sense of Lemma 5.4, then t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i}, for every valid valuation ρ, set of valid valuations {ρ_i}_n, and substitution θ ∈ ⟦Γ⟧_ρ (i.e. substitution such that (x : V) ∈ Γ implies θx ∈ ⟦V⟧_ρ).

ax (Γ, x : U ⊢ x : U). Then for any θ ∈ ⟦Γ, x : U⟧_ρ, by definition we have θx ∈ ⟦U⟧_ρ. From Lemma B.2, we deduce that x θ ∈ ⟦U⟧_ρ.

0I (from Γ ⊢ t : T deduce Γ ⊢ 0 : 0.T). Note that 0 θ = 0, and 0 is in any reducibility candidate by RC4.

→I (from Γ, x : U ⊢ t : T deduce Γ ⊢ λx.t : U → T). Let T ≡ V or T ≡ ∑_{i=1}^n α_i U_i with n > 1. Then by the induction hypothesis, for any ρ, set {ρ_i}_n not acting on FV(Γ) ∪ FV(U), and θ ∈ ⟦Γ, x : U⟧_ρ, we have t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i}, or simply t θ ∈ ⟦V⟧_ρ if T ≡ V.

In any case, we must prove that for θ ∈ ⟦Γ⟧_ρ, (λx.t) θ ∈ ⟦U → T⟧_{ρ,ρ′}, or what is the same, λx.t θ ∈ ⟦U⟧_{ρ,ρ′} → ⟦T⟧_{ρ,ρ′}, where ρ′ does not act on FV(Γ). If we can show that b ∈ ⟦U⟧_{ρ,ρ′} implies (λx.t θ) b ∈ ⟦T⟧_{ρ,ρ′}, then we are done. Notice that ⟦T⟧_{ρ,ρ′} = ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ′}, or ⟦T⟧_{ρ,ρ′} = ⟦V⟧_{ρ,ρ′}. Since (λx.t θ) b is a neutral term, we just need to prove that every one-step reduction of it is in ⟦T⟧_{ρ,ρ′}, which by RC3 closes the case. By RC1, t θ and b are strongly normalising, and so is λx.t θ. Then we proceed by induction on the sum of the lengths of all the reduction paths starting from (λx.t θ) plus the same sum starting from b:

(λx.t θ) b → (λx.t θ) b′ with b → b′. Then b′ ∈ ⟦U⟧_{ρ,ρ′} and we close by induction hypothesis.
(λx.t θ) b → (λx.t′) b with t θ → t′. If T ≡ V, then t θ ∈ ⟦V⟧_{ρ,ρ′}, and by RC2 so is t′. In the other case t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i} for any {ρ_i}_n not acting on FV(Γ); take ∀i, ρ_i = ρ′, so t θ ∈ ⟦T⟧_{ρ,ρ′} and so are its reducts, such as t′. We close by induction hypothesis.
(λx.t θ) b → t θ[b/x]. Let θ′ = θ; x ↦ b. Then θ′ ∈ ⟦Γ, x : U⟧_{ρ,ρ′}, so t θ′ ∈ ⟦T⟧_{ρ,ρ′}. Notice that t θ[b/x] = t θ′.


→E (from Γ ⊢ t : ∑_{i=1}^n α_i ∀X⃗.(U → T_i) and Γ ⊢ r : ∑_{j=1}^m β_j U[A⃗_j/X⃗] deduce Γ ⊢ (t) r : ∑_{i=1}^n ∑_{j=1}^m α_i β_j T_i[A⃗_j/X⃗]).

Without loss of generality, assume that the T_i's are different from each other (similarly for the A⃗_j's). By the induction hypothesis, for any ρ, {ρ_i, ρ_j}_{n,m} not acting on FV(Γ), and θ ∈ ⟦Γ⟧_ρ, we have t θ ∈ ∑_{i=1}^n ⋂_{B⃗⊆A⃗∈RC} ⟦U → T_i⟧_{ρ,ρ_i,(X⃗_+,X⃗_−)↦(A⃗,B⃗)} and r θ ∈ ∑_{j=1}^m ⟦U[A⃗_j/X⃗]⟧_{ρ,ρ_j}; or if n = 1 and α_1 = 1, t θ ∈ ⋂_{B⃗⊆A⃗∈RC} ⟦U → T_1⟧_{ρ,(X⃗_+,X⃗_−)↦(A⃗,B⃗)}, and if m = 1 and β_1 = 1, r θ ∈ ⟦U[A⃗_j/X⃗]⟧_ρ. Notice that for any A⃗_j, if U is a unit type, U[A⃗_j/X⃗] is still unit.

For every i, j, let T_i[A⃗_j/X⃗] ≡ ∑_{k=1}^{r_{ij}} γ_k^{ij} W_k^{ij}. We must show that for any ρ, sets {ρ_{ijk}}_{r_{ij}} not acting on FV(Γ) and θ ∈ ⟦Γ⟧_ρ, the term ((t) r) θ is in the set ∑_{i=1..n, j=1..m, k=1..r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}}, or in case of n = m = α_1 = β_1 = r_{11} = 1, ((t) r) θ ∈ ⟦W_1^{11}⟧_ρ.

Since both t θ and r θ are strongly normalising, we proceed by induction on the sum of the lengths of their rewrite sequences. The set Red(((t) r) θ) contains:

(t′) r θ or (t θ) r′ when t θ → t′ or r θ → r′. By RC2, the term t′ is in the set ∑_{i=1}^n ⋂_{B⃗⊆A⃗∈RC} ⟦U → T_i⟧_{ρ,ρ_i,(X⃗_+,X⃗_−)↦(A⃗,B⃗)} (or if n = α_1 = 1, the term t′ is in ⋂_{B⃗⊆A⃗∈RC} ⟦U → T_1⟧_{ρ,(X⃗_+,X⃗_−)↦(A⃗,B⃗)}), and r′ ∈ ∑_{j=1}^m ⟦U[A⃗_j/X⃗]⟧_{ρ,ρ_j} (or in ⟦U[A⃗_1/X⃗]⟧_ρ if m = β_1 = 1). In any case, we conclude by the induction hypothesis.
(t_1) r θ + (t_2) r θ with t θ = t_1 + t_2. Let s be the size of the derivation of Γ ⊢ t : ∑_{i=1}^n α_i ∀X⃗.(U → T_i). By Lemma 4.12, there exists R_1 + R_2 ≡ ∑_{i=1}^n α_i ∀X⃗.(U → T_i) such that Γ ⊢ t_1 : R_1 and Γ ⊢ t_2 : R_2 can be derived with a derivation tree of size s − 1 if R_1 + R_2 = ∑_{i=1}^n α_i ∀X⃗.(U → T_i), or of size s − 2 in the other case. In such case, there exist N_1, N_2 ⊆ {1, …, n} with N_1 ∪ N_2 = {1, …, n} such that

    R_1 ≡ ∑_{i∈N_1\N_2} α_i ∀X⃗.(U → T_i) + ∑_{i∈N_1∩N_2} α′_i ∀X⃗.(U → T_i) and
    R_2 ≡ ∑_{i∈N_2\N_1} α_i ∀X⃗.(U → T_i) + ∑_{i∈N_1∩N_2} α″_i ∀X⃗.(U → T_i)

where ∀i ∈ N_1 ∩ N_2, α′_i + α″_i = α_i. Therefore, using ≡ we get

    Γ ⊢ t_1 : ∑_{i∈N_1\N_2} α_i ∀X⃗.(U → T_i) + ∑_{i∈N_1∩N_2} α′_i ∀X⃗.(U → T_i) and
    Γ ⊢ t_2 : ∑_{i∈N_2\N_1} α_i ∀X⃗.(U → T_i) + ∑_{i∈N_1∩N_2} α″_i ∀X⃗.(U → T_i)

with a derivation tree of size s − 1. So, using rule →E, we get

    Γ ⊢ (t_1) r : ∑_{i∈N_1\N_2} ∑_{j=1}^m α_i β_j T_i[A⃗_j/X⃗] + ∑_{i∈N_1∩N_2} ∑_{j=1}^m α′_i β_j T_i[A⃗_j/X⃗] and
    Γ ⊢ (t_2) r : ∑_{i∈N_2\N_1} ∑_{j=1}^m α_i β_j T_i[A⃗_j/X⃗] + ∑_{i∈N_1∩N_2} ∑_{j=1}^m α″_i β_j T_i[A⃗_j/X⃗]

with a derivation tree of size s. Hence, by the induction hypothesis the term ((t_1) r) θ is in the set

    ∑_{i∈N_1, j=1..m, k=1..r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}},

and the term ((t_2) r) θ is in ∑_{i∈N_2, j=1..m, k=1..r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}}. Hence, by Lemma B.3 the term ((t_1) r) θ + ((t_2) r) θ is in the set ∑_{i=1..n, j=1..m, k=1..r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}}. The case where m = β_1 = r_{11} = 1, and card(N_1) or card(N_2) is equal to 1, follows analogously.
(t θ) r_1 + (t θ) r_2 with r θ = r_1 + r_2. Analogous to the previous case.
(α·t′) r θ with t θ = α·t′. Let s be the size of the derivation of Γ ⊢ t : ∑_{i=1}^n α_i ∀X⃗.(U → T_i). Then by Lemma 4.10, ∑_{i=1}^n α_i ∀X⃗.(U → T_i) ≡ α·R and Γ ⊢ t′ : R. If ∑_{i=1}^n α_i ∀X⃗.(U → T_i) = α·R, such a derivation is obtained with size s − 1; in the other case it is obtained in size s − 2, and by Lemma 4.2, R ≡ ∑_{i=1}^n γ_i V_i + ∑_{k=1}^h η_k X_k; however, it is easy to see that h = 0 because R is equivalent to a sum of terms, where none of them is X. So R ≡ ∑_{i=1}^n γ_i V_i. Notice that ∑_{i=1}^n α_i ∀X⃗.(U → T_i) ≡ ∑_{i=1}^n α γ_i V_i. Then by Lemma 4.4, there exists a permutation p such that α_i = α γ_{p(i)} and ∀X⃗.(U → T_i) ≡ V_{p(i)}. Then by rule ≡, in size s − 1 we can derive Γ ⊢ t′ : ∑_{i=1}^n (α_i/α) ∀X⃗.(U → T_i). Using rule →E, we get Γ ⊢ (t′) r : ∑_{i=1}^n ∑_{j=1}^m (α_i/α) β_j T_i[A⃗_j/X⃗] in size s. Therefore, by the induction hypothesis, ((t′) r) θ is in the set ∑_{i=1..n, j=1..m, k=1..r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}}. We conclude with Lemma B.3.
(t θ) (α·r′) with r θ = α·r′. Analogous to the previous case.
0 with t θ = 0, or r θ = 0. By RC4, 0 is in every candidate.
The term t′[r θ/x], when t θ = λx.t′ and r θ is a base term. Note that this term is of the form t′ θ′ where θ′ = θ; x ↦ r θ. We are in the situation where the types of t and r are respectively ∀X⃗.(U → T) and U[A⃗/X⃗], and so ∑_{i,j,k} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}} = ∑_{k=1}^r ⟦W_k⟧_{ρ,ρ_k}, where we omit the indices 11 (or directly ⟦W⟧_ρ if r = 1). Note that

    λx.t′ θ ∈ ⟦∀X⃗.(U → T)⟧_{ρ,ρ′} = ⋂_{B⃗⊆A⃗∈RC} ⟦U → T⟧_{ρ,ρ′,(X⃗_+,X⃗_−)↦(A⃗,B⃗)}

for all possible ρ′ such that |ρ′| does not intersect FV(Γ). Choose A⃗ and B⃗ equal to ⟦A⃗⟧_{ρ,ρ′}, and choose ρ′_− to send every X in its domain to ⋂_k ρ_{k−}(X) and ρ′_+ to send all the X in its domain to ∑_k ρ_{k+}(X). Then by definition of → and Lemma 5.5,

    λx.t′ θ ∈ ⟦U → T⟧_{ρ,ρ′,(X⃗_+,X⃗_−)↦(⟦A⃗⟧_{ρ,ρ′},⟦A⃗⟧_{ρ,ρ′})} = ⟦U[A⃗/X⃗]⟧_{ρ,ρ′} → ⟦T⟧_{ρ,ρ′,(X⃗_+,X⃗_−)↦(⟦A⃗⟧_{ρ,ρ′},⟦A⃗⟧_{ρ,ρ′})}.

Since r θ ∈ ⟦U[A⃗/X⃗]⟧_{ρ,ρ′}, using Lemmas B.4 and 5.5,

    (λx.t′ θ) r θ ∈ ⟦T⟧_{ρ,ρ′,(X⃗_+,X⃗_−)↦(⟦A⃗⟧_{ρ,ρ′},⟦A⃗⟧_{ρ,ρ′})} = ⟦T[A⃗/X⃗]⟧_{ρ,ρ′} = ∑_{k=1}^n ⟦W_k⟧_{ρ,ρ′}, or just ⟦W_1⟧_{ρ,ρ′} if n = 1.

Now, from Lemma B.1, for all k we have ⟦W_k⟧_{ρ,ρ′} ⊆ ⟦W_k⟧_{ρ,ρ_k}. Therefore

    (λx.t′ θ) r θ ∈ ∑_{k=1}^n ⟦W_k⟧_{ρ,ρ_k}.

Since the set Red(((t) r) θ) ⊆ ∑_{i=1}^n ∑_{j=1}^m ∑_{k=1}^{r_{ij}} ⟦W_k^{ij}⟧_{ρ,ρ_{ijk}}, we can conclude by RC3.


∀I (from Γ ⊢ t : ∑_{i=1}^n α_i U_i and X ∉ FV(Γ) deduce Γ ⊢ t : ∑_{i=1}^n α_i ∀X.U_i). By the induction hypothesis, for any ρ and set {ρ_i}_n not acting on FV(Γ), we have, for θ ∈ ⟦Γ⟧_ρ, t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i} (or t θ ∈ ⟦U_1⟧_{ρ,ρ_1} if n = α_1 = 1). Since X ∉ FV(Γ), we can take ρ′_i = ρ_i, (X_+, X_−) ↦ (A, B); then for any B ⊆ A, we have t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i,(X_+,X_−)↦(A,B)} (or t θ ∈ ⟦U_1⟧_{ρ,ρ_1,(X_+,X_−)↦(A,B)} if n = α_1 = 1).

Since it is valid for any B ⊆ A, we can take all the intersections; thus we have t θ ∈ ∑_{i=1}^n ⋂_{B⊆A∈RC} ⟦U_i⟧_{ρ,ρ_i,(X_+,X_−)↦(A,B)} = ∑_{i=1}^n ⟦∀X.U_i⟧_{ρ,ρ_i} (or if n = α_1 = 1 simply t θ ∈ ⋂_{B⊆A∈RC} ⟦U_1⟧_{ρ,ρ_1,(X_+,X_−)↦(A,B)} = ⟦∀X.U_1⟧_{ρ,ρ_1}).


∀E (from Γ ⊢ t : ∑_{i=1}^n α_i ∀X.U_i deduce Γ ⊢ t : ∑_{i=1}^n α_i U_i[A/X]). By the induction hypothesis, for any ρ and all families {ρ_i}_n, we have, for all θ in ⟦Γ⟧_ρ, that the term t θ is in ∑_{i=1}^n ⟦∀X.U_i⟧_{ρ,ρ_i} = ∑_{i=1}^n ⋂_{B⊆A∈RC} ⟦U_i⟧_{ρ,ρ_i,(X_+,X_−)↦(A,B)} (or if n = α_1 = 1, t θ is in the set ⟦∀X.U_1⟧_{ρ,ρ_1} = ⋂_{B⊆A∈RC} ⟦U_1⟧_{ρ,ρ_1,(X_+,X_−)↦(A,B)}). Since it is in the intersections, we can choose A = ⟦A⟧_{ρ,ρ_i} and B = ⟦A⟧_{ρ,ρ_i}, and then t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i,X↦A} = ∑_{i=1}^n ⟦U_i[A/X]⟧_{ρ,ρ_i} (or t θ ∈ ⟦U_1⟧_{ρ,ρ_1,X↦A} = ⟦U_1[A/X]⟧_{ρ,ρ_1} if n = α_1 = 1).

αI (from Γ ⊢ t : T deduce Γ ⊢ α·t : α·T). Let T ≡ ∑_{i=1}^n α_i U_i, so α·T ≡ ∑_{i=1}^n α α_i U_i. By the induction hypothesis, for any ρ and {ρ_i}_n, we have, for θ ∈ ⟦Γ⟧_ρ, t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i}. By Lemma B.3, (α·t) θ = α·t θ ∈ ∑_{i=1}^n ⟦U_i⟧_{ρ,ρ_i}. Analogous if n = α_1 = 1.
+I (from Γ ⊢ t : T and Γ ⊢ r : R deduce Γ ⊢ t + r : T + R). Let T ≡ ∑_{i=1}^n α_i U_i^1 and R ≡ ∑_{j=1}^m β_j U_j^2. By the induction hypothesis, for any ρ, {ρ_i}_n, {ρ′_j}_m, we have, for θ ∈ ⟦Γ⟧_ρ, t θ ∈ ∑_{i=1}^n ⟦U_i^1⟧_{ρ,ρ_i} and r θ ∈ ∑_{j=1}^m ⟦U_j^2⟧_{ρ,ρ′_j}. Then by Lemma B.3, (t + r) θ = t θ + r θ ∈ ∑_{i,k} ⟦U_i^k⟧_{ρ,ρ_i}. Analogous if n = α_1 = 1 and/or m = β_1 = 1.

≡ (from Γ ⊢ t : T and T ≡ R deduce Γ ⊢ t : R). Let T ≡ ∑_{i=1}^n α_i U_i in the sense of Lemma 5.4; then since T ≡ R, R is also equivalent to ∑_{i=1}^n α_i U_i, so Γ ⊨ t : T implies Γ ⊨ t : R. □

Appendix C. Detailed proofs of lemmas and theorems in Section 6

Theorem 6.1 (Characterisation of terms). Let T be a generic type with canonical decomposition ∑_{i=1}^n α_i.U_i, in the sense of Lemma 5.4. If ⊢ t : T, then t →* ∑_{i=1}^n ∑_{j=1}^{m_i} β_{ij} b_{ij}, where for all i, ⊢ b_{ij} : U_i and ∑_{j=1}^{m_i} β_{ij} = α_i, and with the convention that ∑_{j=1}^0 β_{ij} = 0 and ∑_{j=1}^0 β_{ij} b_{ij} = 0.

Proof. We proceed by induction on the maximal length of reduction from t.

Let t = b or t = 0. Trivial using Lemma 4.15 or 4.11, and Lemma 5.4.   


Let t = (t_1) t_2. Then by Lemma 4.13, ⊢ t_1 : ∑_{k=1}^o δ_k ∀X⃗.(U → T_k) and ⊢ t_2 : ∑_{l=1}^p η_l U[A⃗_l/X⃗], where ∑_{k=1}^o ∑_{l=1}^p δ_k η_l T_k[A⃗_l/X⃗] ≡ T. Without loss of generality, consider these two types to be already canonical decompositions, that is, for all k_1, k_2, T_{k_1} ≢ T_{k_2} and for all l_1, l_2, U[A⃗_{l_1}/X⃗] ≢ U[A⃗_{l_2}/X⃗] (in the other case, it suffices to sum up the equal types). Hence, by the induction hypothesis, t_1 →* ∑_{k=1}^o ∑_{s=1}^{q_k} δ_{ks} b_{ks} and t_2 →* ∑_{l=1}^p ∑_{r=1}^{t_l} η_{lr} b′_{lr}, where for all k, ⊢ b_{ks} : ∀X⃗.(U → T_k) and ∑_{s=1}^{q_k} δ_{ks} = δ_k, and for all l, ⊢ b′_{lr} : U[A⃗_l/X⃗] and ∑_{r=1}^{t_l} η_{lr} = η_l. By rule →E, for each k, s, l, r we have ⊢ (b_{ks}) b′_{lr} : T_k[A⃗_l/X⃗], where the induction hypothesis also applies, and notice that (t_1) t_2 →* (∑_{k=1}^o ∑_{s=1}^{q_k} δ_{ks} b_{ks}) (∑_{l=1}^p ∑_{r=1}^{t_l} η_{lr} b′_{lr}) →* ∑_{k=1}^o ∑_{s=1}^{q_k} ∑_{l=1}^p ∑_{r=1}^{t_l} δ_{ks} η_{lr} (b_{ks}) b′_{lr}. Therefore, we conclude with the induction hypothesis.
Let t = α·r. Then by Lemma 4.10, ⊢ r : R, with α·R ≡ T. Hence, using Lemmas 5.4 and 4.4, R has a canonical decomposition ∑_{i=1}^n α′_i U_i, where α α′_i = α_i. Hence, by the induction hypothesis, r →* ∑_{i=1}^n ∑_{j=1}^{m_i} β′_{ij} b_{ij}, where for all i, ⊢ b_{ij} : U_i and ∑_{j=1}^{m_i} β′_{ij} = α′_i. Notice that t = α·r →* α·∑_{i=1}^n ∑_{j=1}^{m_i} β′_{ij} b_{ij} →* ∑_{i=1}^n ∑_{j=1}^{m_i} α β′_{ij} b_{ij}, and ∑_{j=1}^{m_i} α β′_{ij} = α ∑_{j=1}^{m_i} β′_{ij} = α α′_i = α_i.
Let t = t_1 + t_2. Then by Lemma 4.12, ⊢ t_1 : T_1 and ⊢ t_2 : T_2, with T_1 + T_2 ≡ T. By Lemma 5.4, T_1 has canonical decomposition ∑_{j=1}^m γ_j V_j and T_2 has canonical decomposition ∑_{k=1}^o η_k W_k. Hence by the induction hypothesis t_1 →* ∑_{j=1}^m ∑_{l=1}^{p_j} γ_{jl} b_{jl} and t_2 →* ∑_{k=1}^o ∑_{s=1}^{q_k} η_{ks} b′_{ks}, where for all j, ⊢ b_{jl} : V_j and ∑_{l=1}^{p_j} γ_{jl} = γ_j, and for all k, ⊢ b′_{ks} : W_k and ∑_{s=1}^{q_k} η_{ks} = η_k. If for all j, k we have V_j ≢ W_k, then we are done, since the canonical decomposition of T is ∑_{j=1}^m γ_j V_j + ∑_{k=1}^o η_k W_k. In the other case, suppose there exist j′, k′ such that V_{j′} ≡ W_{k′}; then the canonical decomposition of T would be ∑_{j=1, j≠j′}^m γ_j V_j + ∑_{k=1, k≠k′}^o η_k W_k + (γ_{j′} + η_{k′}) V_{j′}. Notice that ∑_{l=1}^{p_{j′}} γ_{j′l} + ∑_{s=1}^{q_{k′}} η_{k′s} = γ_{j′} + η_{k′}. □
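As a concrete instance of Theorem 6.1 (illustrative only; the scalars and base terms below are invented for the example), take a canonical decomposition with n = 2:

```latex
% Hypothetical instance of the characterisation theorem.
% Suppose  |- t : (1/3).U_1 + (2/3).U_2   (canonical decomposition, n = 2).
% The theorem guarantees a reduct that is a superposition of base terms
% grouped by type, e.g.
%
%   t ->* (1/3) b_{11} + (1/6) b_{21} + (1/2) b_{22},
%
% with  |- b_{11} : U_1,   |- b_{21} : U_2,   |- b_{22} : U_2,
% and the scalars of each group summing to the original coefficients:
%   beta_{11} = 1/3 = alpha_1,
%   beta_{21} + beta_{22} = 1/6 + 1/2 = 2/3 = alpha_2.
```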
References

[1] M. Alberti, Normal forms for the algebraic lambda-calculus, in: D. Pous, C. Tasson (Eds.), Journées francophones des langages applicatifs (JFLA 2013), Aussois, France, 2013, accessible online at http://hal.inria.fr/hal-00779911.
[2] T. Altenkirch, J.J. Grattage, A functional quantum programming language, in: 20th Annual IEEE Symposium on Logic in Computer Science (LICS 2005), IEEE Computer Society, 2005, pp. 249–258.
[3] A. Arbiser, A. Miquel, A. Ríos, The λ-calculus with constructors: syntax, confluence and separation, J. Funct. Program. 19 (5) (2009) 581–631.
[4] P. Arrighi, A. Díaz-Caro, A System F accounting for scalars, Log. Methods Comput. Sci. 8 (1:11) (2012).
[5] P. Arrighi, A. Díaz-Caro, B. Valiron, A type system for the vectorial aspects of the linear-algebraic lambda-calculus, in: E. Kashefi, J. Krivine, F. van Raamsdonk (Eds.), 7th International Workshop on Developments of Computational Methods (DCM 2011), in: Electron. Proc. Theoret. Comput. Sci., vol. 88, Open Publishing Association, 2012, pp. 1–15.
[6] P. Arrighi, G. Dowek, A computational definition of the notion of vectorial space, in: N. Martí-Oliet (Ed.), Proceedings of the Fifth International Workshop on Rewriting Logic and Its Applications (WRLA 2004), in: Electron. Notes Theor. Comput. Sci., vol. 117, 2005, pp. 249–261.
[7] P. Arrighi, G. Dowek, Lineal: a linear-algebraic lambda-calculus, Log. Methods Comput. Sci. 13 (1:8) (2017).
[8] A. Assaf, A. Díaz-Caro, S. Perdrix, C. Tasson, B. Valiron, Call-by-value, call-by-name and the vectorial behaviour of the algebraic λ-calculus, Log. Methods Comput. Sci. 10 (4:8) (2014).
[9] F. Barbanera, M. Dezani-Ciancaglini, U. de'Liguoro, Intersection and union types: syntax and semantics, Inf. Comput. 119 (1995) 202–230.
[10] H.P. Barendregt, Lambda-Calculi with Types, Handbook Log. Comput. Sci., vol. II, Oxford University Press, 1992.
[11] A. Bernadet, S.J. Lengrand, Non-idempotent intersection types and strong normalisation, Log. Methods Comput. Sci. 9 (4:3) (2013).
[12] G. Boudol, Lambda-calculi for (strict) parallel functions, Inf. Comput. 108 (1) (1994) 51–127.
[13] O. Bournez, M. Hoyrup, Rewriting logic and probabilities, in: R. Nieuwenhuis (Ed.), Proceedings of RTA-2003, in: Lect. Notes Comput. Sci., vol. 2706, Springer, 2003, pp. 61–75.
[14] A. Bucciarelli, T. Ehrhard, G. Manzonetto, A relational semantics for parallelism and non-determinism in a functional setting, Ann. Pure Appl. Logic 163 (7) (2012) 918–934.
[15] P. Buiras, A. Díaz-Caro, M. Jaskelioff, Confluence via strong normalisation in an algebraic λ-calculus with rewriting, in: S. Ronchi della Rocca, E. Pimentel (Eds.), 6th Workshop on Logical and Semantic Frameworks with Applications (LSFA 2011), in: Electron. Proc. Theoret. Comput. Sci., vol. 81, Open Publishing Association, 2012, pp. 16–29.
[16] H. Cirstea, C. Kirchner, L. Liquori, The Rho Cube, in: F. Honsell, M. Miculan (Eds.), Proceedings of FOSSACS-2001, in: Lect. Notes Comput. Sci., vol. 2030, Springer, 2001, pp. 168–183.
[17] E. De Benedetti, S. Ronchi Della Rocca, Bounding normalization time through intersection types, in: L. Paolini (Ed.), Proceedings of Sixth Workshop on Intersection Types and Related Systems (ITRS 2012), EPTCS, Cornell University Library, 2013, pp. 48–57.
[18] U. de'Liguoro, A. Piperno, Non deterministic extensions of untyped λ-calculus, Inf. Comput. 122 (2) (1995) 149–177.
[19] A. Di Pierro, C. Hankin, H. Wiklicky, Probabilistic λ-calculus and quantitative program analysis, J. Log. Comput. 15 (2) (2005) 159–179.
[20] A. Díaz-Caro, Barendregt's proof of subject reduction for λ2, accessible online on http://cstheory.stackexchange.com/questions/8891, 2011.
[21] A. Díaz-Caro, G. Dowek, The probability of non-confluent systems, in: M. Ayala-Rincón, E. Bonelli, I. Mackie (Eds.), Proceedings of the 9th International Workshop on Developments in Computational Models, in: Electron. Proc. Theoret. Comput. Sci., vol. 144, Open Publishing Association, 2014, pp. 1–15.
[22] A. Díaz-Caro, G. Dowek, Simply typed lambda-calculus modulo type isomorphisms, hal-01109104, 2017.
[23] A. Díaz-Caro, G. Dowek, Typing quantum superpositions and measurement, arXiv:1601.04294, 2017.
[24] A. Díaz-Caro, G. Manzonetto, M. Pagani, Call-by-value non-determinism in a linear logic type discipline, in: S. Artemov, A. Nerode (Eds.), International Symposium on Logical Foundations of Computer Science (LFCS 2013), in: Lect. Notes Comput. Sci., vol. 7734, Springer, Berlin, Heidelberg, 2013, pp. 164–178.
[25] A. Díaz-Caro, B. Petit, Linearity in the non-deterministic call-by-value setting, in: L. Ong, R. de Queiroz (Eds.), 19th International Workshop on Logic, Language, Information and Computation (WoLLIC 2012), in: Lect. Notes Comput. Sci., vol. 7456, Springer, Berlin, Heidelberg, 2012, pp. 216–231.
[26] D.J. Dougherty, Adding algebraic rewriting to the untyped lambda calculus, Inf. Comput. 101 (2) (1992) 251–267.
[27] T. Ehrhard, A finiteness structure on resource terms, in: Proceedings of LICS-2010, IEEE Computer Society, 2010, pp. 402–410.
[28] T. Ehrhard, L. Regnier, The differential lambda-calculus, Theor. Comput. Sci. 309 (1) (2003) 1–41.
[29] J.-Y. Girard, Y. Lafont, P. Taylor, Proofs and Types, Camb. Tracts Theor. Comput. Sci., vol. 7, Cambridge University Press, 1989.
[30] L.K. Grover, A fast quantum mechanical algorithm for database search, in: Proceedings of the 28th Annual ACM Symposium on Theory of Computing, STOC-96, ACM, 1996, pp. 212–219.
[31] O.M. Herescu, C. Palamidessi, Probabilistic asynchronous π-calculus, in: J. Tiuryn (Ed.), Proceedings of FOSSACS-2000, in: Lect. Notes Comput. Sci., vol. 1784, Springer, 2000, pp. 146–160.
[32] J.-P. Jouannaud, H. Kirchner, Completion of a set of rules modulo a set of equations, SIAM J. Comput. 15 (4) (1986) 1155–1194.
[33] D. Kesner, D. Ventura, Quantitative types for the linear substitution calculus, in: J. Diaz, I. Lanese, D. Sangiorgi (Eds.), Theoretical Computer Science, in: Lect. Notes Comput. Sci., vol. 8705, Springer, Berlin, Heidelberg, 2014, pp. 296–310.
[34] J.-L. Krivine, Lambda-calcul: Types et Modèles, Études et Recherches en Informatique, Masson, 1990.
[35] U. Dal Lago, M. Zorzi, Probabilistic operational semantics for the lambda calculus, RAIRO Theor. Inform. Appl. 46 (3) (2012) 413–450.
[36] M. Pagani, S. Ronchi Della Rocca, Linearity, non-determinism and solvability, Fundam. Inform. 103 (1–4) (2010) 173–202.
[37] B. Petit, A polymorphic type system for the lambda-calculus with constructors, in: P.-L. Curien (Ed.), Proceedings of TLCA-2009, in: Lect. Notes Comput. Sci., vol. 5608, Springer, 2009, pp. 234–248.
[38] E. Pimentel, S. Ronchi Della Rocca, L. Roversi, Intersection types from a proof-theoretic perspective, Fundam. Inform. 121 (1–4) (2012) 253–274.
[39] P.W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput. 26 (5) (1997) 1484–1509.
[40] C. Tasson, Algebraic totality, towards completeness, in: P.-L. Curien (Ed.), 9th International Conference on Typed Lambda Calculi and Applications (TLCA 2009), in: Lect. Notes Comput. Sci., vol. 5608, Springer, 2009, pp. 325–340.
[41] B. Valiron, Orthogonality and algebraic lambda-calculus, in: Proceedings of the 7th International Workshop on Quantum Physics and Logic (QPL 2010), Oxford, UK, May 29–30, 2010, pp. 169–175, http://www.cs.ox.ac.uk/people/bob.coecke/QPL_proceedings.html.
[42] A. van Tonder, A lambda-calculus for quantum computation, SIAM J. Comput. 33 (2004) 1109–1135.
[43] L. Vaux, The algebraic lambda-calculus, Math. Struct. Comput. Sci. 19 (5) (2009) 1029–1059.
