An Exact Method For Assortment Optimization Under The Nested Logit Model
Article history: Received 24 December 2019; Accepted 2 December 2020; Available online 14 January 2021.

Keywords: Combinatorial optimization; Revenue management; Assortment optimization; Fractional programming; Nested logit.

Abstract: We study the problem of finding an optimal assortment of products maximizing the expected revenue, in which customer preferences are modeled using a nested logit choice model. This problem is known to be polynomially solvable in a specific case and NP-hard otherwise, with only approximation algorithms existing in the literature. We provide an exact general method that embeds a tailored Branch-and-Bound algorithm into a fractional programming framework. In contrast to the existing literature, in which assumptions are imposed on either the structure of nests or the combination and characteristics of products, no assumptions on the input data are imposed. Although our approach is not polynomial in input size, it can solve the most general problem setting for large-size instances. We show that the fractional programming scheme's parameterized subproblem, a highly non-linear binary optimization problem, is decomposable by nests, which is the primary advantage of the approach. To solve the subproblem for each nest, we propose a two-stage approach. In the first stage, we fix a large set of variables based on the single-nest subproblem's newly-derived structural properties. This can significantly reduce the problem size. In the second stage, we design a tailored Branch-and-Bound algorithm with problem-specific upper bounds. Numerical results show that the approach is able to solve assortment instances with five nests and with up to 5000 products per nest. The most challenging instances for our approach are those with a mix of nests' dissimilarity parameters, where some of them are smaller than one and others are greater than one.

© 2020 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.ejor.2020.12.007
L. Alfandari, A. Hassanzadeh and I. Ljubić European Journal of Operational Research 291 (2021) 830–845
according to her preferences over the available options. For empirical work in this area, see, e.g., Lee (1999), Tiwari and Hasegawa (2004), Train, McFadden, and Ben-Akiva (1987), Train, Ben-Akiva, and Atherton (1989), and Yates and Mackay (2006).

1.1. Problem definition

We first provide a formal definition of the Assortment Optimization Problem under the Nested-Logit choice model (AOPNL). Let M = {1, ..., m} be the set of all nests and N = {1, ..., n} the index set of products in a nest; in every nest, up to |N| products can be offered. We assume that each product can appear in exactly one nest. We call C_i the collection of all feasible assortments for nest i ∈ M, and an assortment in nest i is denoted by S_i ∈ C_i. We also denote by r_ik ≥ 0 the revenue obtained by selling one unit of product k ∈ S_i offered in nest i. Finally, customers' preference for product k in nest i is v_ik ≥ 0.

Assuming a customer chooses nest i and assortment S_i ∈ C_i is offered for nest i, based on MNL choice probabilities, the probability for that customer to choose product k is:

$$P_{ik}(S_i) = \begin{cases} \dfrac{v_{ik}}{v_{i0} + \sum_{j \in S_i} v_{ij}} & \text{if } k \in S_i,\\ 0 & \text{if } k \notin S_i, \end{cases} \qquad i \in M,\ k \in N, \tag{1}$$

where v_i0 ≥ 0 is the attractiveness of leaving nest i without any purchase. Furthermore, the probability that a particular nest i is chosen given that assortment (S_1, ..., S_m) ∈ C_1 × ··· × C_m is offered is:

$$Q_i(S_1, \ldots, S_m) = \frac{\bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i}}{V_0 + \sum_{l \in M} \bigl(v_{l0} + \sum_{k \in S_l} v_{lk}\bigr)^{\gamma_l}}, \qquad i \in M,$$

where V_0 ≥ 0 is the attractiveness of the option of leaving the system without picking any nest (in the first stage), and γ_i ≥ 0 is the dissimilarity parameter of nest i. The dissimilarity parameter of a nest characterizes the degree of dissimilarity between the products within that nest. It can also be interpreted as the influence of a nest over others (Désir, Goyal, & Zhang, 2014; Gallego & Topaloglu, 2014). If assortment S_i is offered in nest i and if this nest is selected, the expected revenue is:

$$R_i(S_i) = \sum_{k \in S_i} r_{ik} P_{ik}(S_i),$$

where R_i(∅) = 0. Therefore, if we offer assortment (S_1, ..., S_m) where S_i ∈ C_i, for i ∈ M, the expected revenue we obtain over all nests is:

$$\Pi(S_1, \ldots, S_m) = \sum_{i \in M} Q_i(S_1, \ldots, S_m) \sum_{k \in S_i} r_{ik} P_{ik}(S_i).$$

Finally, the assortment optimization problem where customer choice follows a nested logit model is to find an assortment so that the expected revenue is maximized, i.e., we have:

$$Z = \max_{\substack{(S_1, \ldots, S_m)\\ S_i \in C_i,\ i \in M}} \Pi(S_1, \ldots, S_m). \tag{2}$$

Davis, Gallego, and Topaloglu (2014) proved that unless γ_i ≤ 1 and v_i0 = 0 for all i ∈ M, problem (2) is NP-hard. In this article, we provide a fast non-trivial exact solution framework for solving the AOPNL in a general setting, even when γ_i > 1 or v_i0 > 0 for some i ∈ M.

1.2. Motivation and main contribution

The literature of assortment optimization under the nested logit model is quite rich concerning analytical properties of the problem under various assumptions. For example, one primary assumption is that the dissimilarity parameter of all nests is less than or equal to one (which means that all products within the same nest compete against each other). This assumption might easily be violated, since some products might act synergistically. This situation has been thoroughly explained in the research of Davis et al. (2014) and Segev (2020). These are, to our knowledge, the only papers that study scenarios in which dissimilarity parameters can be higher than one, and for that, the authors proposed approximation algorithms that provide assortments with a worst-case performance guarantee (see Section 1.3 for further details).

This paper provides an exact method based on fractional programming to generate optimal assortments under the nested logit model. Our algorithm is not polynomial in input size, but it is sufficiently fast to solve to optimality problems with up to 5000 products and 5 nests. Contrary to the existing literature, our method can be used to solve a wide range of problem variants, as it imposes no assumption on the input data. The key point of the method is that the parameterized subproblem at each iteration of the main fractional programming algorithm, which is a highly non-linear binary optimization problem, can be decomposed by nests. This enables us to solve a collection of subproblems that are relatively easier to handle. We prove in Section 3.1 that the subproblem is still NP-hard; however, it can be tackled with an efficient, tailored branch-and-bound algorithm.

Each subproblem for each nest is solved in two phases. First, a pre-processing step reduces the size of the subproblem by identifying revenue thresholds beyond which products are guaranteed to be offered or not. Then a Branch-and-Bound (B&B) algorithm solves the reduced problem, using both pre-processing and tailored upper bounds at each node. The pre-processing enables a large set of variables to be predetermined at each iteration via structural properties of the single-nest subproblem's newly-derived objective function. To the best of our knowledge, this research is the first to provide a generic non-trivial exact method for the assortment optimization problem under the nested logit model with proven optimality and practical efficiency (by non-trivial, we mean not based on a full enumeration of candidate assortments), which is flexible enough to adapt to any mix of dissimilarity parameters and any mix of utilities of no-purchase options. Moreover, our method is effective for instances of realistic size involving thousands of products, for which optimal assortments can be computed within a short computing time.

The paper is organized as follows. In the remainder of this section, we provide a detailed literature overview. The general framework of our parametric search and the decomposition of the parameterized subproblem by nests are explained in Section 2. In Section 3, the parameterized subproblem is analyzed, with the identification of the products that are sure to be offered or not at optimality, based on revenues, and the branch-and-bound algorithm for solving the reduced problem is provided. Section 4 is dedicated to numerical experiments that assess the performance of the method on many instances of various sizes, and we conclude our research in Section 5.

1.3. Related literature

The incorporation of choice models in assortment optimization problems (AOP) has attracted much attention in the recent decade. One of the most popular discrete choice models is the multinomial logit model. Since Talluri and Ryzin (2004) demonstrated that under the multinomial logit model, offering revenue-ordered assortments maximizes the expected revenue, many researchers have studied this class of problem under various settings. For example, Flores, Berbeglia, and Hentenryck (2019) demonstrated that offering revenue-ordered assortments provides the optimal solution if the consumers' choice model follows a sequential multinomial
logit, which is a generalized version of the classic MNL. In the context of robust optimization, Rusmevichientong and Topaloglu (2012) demonstrated that even if the utilities of products are unknown yet belong to either a polyhedron or a rectangular uncertainty set, the cost of robustness is small, and the revenue-ordered assortments still provide maximum expected revenue.

The multinomial logit model was developed based on two main assumptions: 1) customers follow the utility maximization principle, and 2) the utilities of products are independent of each other. In real-world conditions, however, the second assumption might be violated. To remedy this potential shortcoming of MNL, researchers developed other utility-maximizing models such as the nested logit model by Williams (1977). Detailed justifications of this model are available in Ben-Akiva and Lerman (1985), Börsch-Supan (1990), and McFadden (1980). We focus our literature review on the research that primarily uses variants of the nested logit model to incorporate the customer choice behavior in the assortment optimization problem. The available literature is rich and shows how active this area of research is.

Focusing on a popular form of this nested-logit model with only two levels – that is simply referred to as nested logit (NL) in the literature of AOP – and dealing with revenue maximization instead of pricing, Li and Rusmevichientong (2014) developed a greedy algorithm to solve the AOP in polynomial time when the nest dissimilarity parameter is less than or equal to one for all nests and customers have a no-purchase option. In another class of AOP under the nested logit model, Gallego and Topaloglu (2014) showed that if one imposes capacity and cardinality constraints on each nest separately, it is possible to get an optimal assortment by reducing the AOP to a knapsack problem. When having nest-specific space or capacity constraints, however, they show how to come up with a combination of products as a candidate assortment for each nest to have a worst-case approximation guarantee of 2.

When capacity constraints are not defined on each nest separately but over all nests, Feldman and Topaloglu (2015) show that if each product consumes one unit of capacity, the problem can be solved to optimality using an algorithm that sequentially solves a linear and a dynamic program. This approach will not return an optimal solution if we have a general capacity constraint across all nests, and for that case, they propose an algorithm that outputs a 4-approximate solution. In a different context, Davis, Topaloglu, and Williamson (2016) studied a pricing problem where customer choices follow a nested logit model, and there is a relationship between product quality and price, which they call the quality consistency constraint. For this problem, they developed a polynomial-time algorithm with proven optimality. In all the above articles where the focus is on the assortment planning problem under a nested logit model, the dissimilarity parameter is considered to be less than or equal to one.

Focusing only on a nest-specific capacity constraint, Désir et al. (2014) developed a fully polynomial-time algorithm to provide an approximation scheme for the assortment optimization problem under various choice models, including nested logit. Segev (2020) developed an approximation scheme for the capacitated AOP under the nested logit model in its most general form. The author proved that the approximation runs in fully polynomial time, and for any ε > 0, the obtained solution can be approximated within a factor of 1 − ε of the optimal solution. Mai and Lodi (2019) provided a general heuristic approach that can handle various AOPs under MNL, MMNL, and NL. They developed an iterative approach based on approximating the non-linear objective function by a linear one, combined with a greedy local search, that manages to find very good solutions for constrained problems.

In their thorough research, Davis et al. (2014) studied the same version of the AOP with the nested logit model, which is the focus of our manuscript. They divide their work based on whether nest dissimilarity parameters are below one or not and whether the no-purchase option exists for each nest or not. They proved that if in all nests products compete with each other, and if there is no option of leaving the nest without a purchase, the optimal assortment has sorted-by-revenue products for each nest. They prove that if the dissimilarity parameter is less than 1 for all nests and customers can leave the nest without purchasing anything, their heuristic has a worst-case performance guarantee of 2. If the dissimilarity parameter is free and customers cannot leave the nest without a purchase, offering sorted-by-revenue assortments in each nest gives a worst-case performance guarantee of max{ρ, 2κ}. Here κ bounds the ratio between the largest and the smallest preference parameters in a nest, i.e.,

$$\kappa = \max_{i \in M} \left\{ \frac{\max_{k \in N} v_{ik}}{\min_{k \in N} v_{ik}} \right\}, \qquad \rho = \max_{i \in M} \left\{ \frac{\max_{k \in N} r_{ik}}{\min_{k \in N} r_{ik}} \right\}.$$

Finally, for the most general case with free dissimilarity parameters and customers having the no-purchase option, the approach of Davis et al. (2014) provides a performance guarantee of 2κ. They also give a pseudo-polynomial-time algorithm and an approximation algorithm with a performance guarantee of $\delta^{2\gamma_{\max}+1}$ for any given δ > 1. If δ is close to one, the performance of the obtained assortment is better, but with a higher computational time. Regarding the complexity of the problem, the authors also show that if customers have the no-purchase option or some nest dissimilarity parameters are greater than one, the AOPNL is NP-hard. This complexity motivated our research.

A generic nested logit model with more than two levels is studied in the research of Li, Rusmevichientong, and Topaloglu (2015), for a pricing optimization problem. More recently, Chen and Jiang (2019) incorporated product pricing in the context of assortment optimization, also under the d-level nested logit model. They studied cardinality and capacity constraints and provided an ε-approximate solution that can be found in fully polynomial time under a capacity constraint. They also developed an algorithm for the case of cardinality and nest-specific constraints that can obtain the optimal solution in strongly polynomial time.

While the above articles consider static settings where demand parameters are known a priori, Chen et al. (2020) studied a dynamic assortment planning problem in which customer choice behavior follows a nested logit model and the decision-maker learns the demand as the assortment changes dynamically. As is common in the literature of dynamic assortment planning, the authors focused on a regret minimization problem and proposed efficient policies.

In Table 1, we classify the literature on the AOP under the nested logit model based on the value of the dissimilarity parameter of each nest and whether the proposed method provides an optimal solution or an approximation.

2. Fractional programming approach

Recall from the previous section that the assortment optimization problem when the customers' choice model follows a nested logit can be written as:

$$Z = \max_{\substack{(S_1, \ldots, S_m)\\ S_i \in C_i,\ i \in M}} \Pi(S_1, \ldots, S_m), \quad \text{where}$$

$$\Pi(S_1, \ldots, S_m) = \sum_{i \in M} Q_i(S_1, \ldots, S_m) \sum_{k=1}^{n} P_{ik}(S_i)\, r_{ik} = \sum_{i \in M} \frac{\bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i}}{V_0 + \sum_{l \in M} \bigl(v_{l0} + \sum_{k \in S_l} v_{lk}\bigr)^{\gamma_l}} \times \frac{\sum_{k \in S_i} r_{ik} v_{ik}}{v_{i0} + \sum_{k \in S_i} v_{ik}}$$
Table 1
An overview of research on assortment planning under nested logit.

Solution | γ ∈ [0, 1], unrestricted | γ ∈ [0, 1], cardinality cons. | γ ∈ [0, 1], capacity cons. | γ free, unrestricted
Exact | Davis et al. (2014), Li and Rusmevichientong (2014) | Feldman and Topaloglu (2015), Gallego and Topaloglu (2014) | — | our paper
Approx. | — | — | Désir et al. (2014), Feldman and Topaloglu (2015), Gallego and Topaloglu (2014), Segev (2020) | Davis et al. (2014)
$$\Pi(S_1, \ldots, S_m) = \frac{\sum_{i \in M} \bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i - 1} \sum_{k \in S_i} r_{ik} v_{ik}}{V_0 + \sum_{i \in M} \bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i}}. \tag{3}$$

2.1. Problem reformulation

Let x_{S_i} be a binary variable which is set to one if and only if we offer assortment S_i ∈ C_i. Then the AOPNL can be rewritten as the following non-linear binary program with an exponential number of binary variables:

$$\max \frac{N(x)}{D(x)} = \frac{\sum_{i \in M} \sum_{S_i \in C_i} \bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i - 1} \bigl(\sum_{k \in S_i} r_{ik} v_{ik}\bigr)\, x_{S_i}}{V_0 + \sum_{i \in M} \sum_{S_i \in C_i} \bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\bigr)^{\gamma_i} x_{S_i}} \tag{4}$$

$$\text{s.t.:} \quad \sum_{S_i \in C_i} x_{S_i} = 1, \ \ i \in M; \qquad x_{S_i} \in \{0, 1\}, \ \ S_i \in C_i, \ i \in M.$$

Observe that if S_i ∈ C_i, for all i ∈ M, are explicitly generated, then the expressions before the variables x_{S_i} in the numerator N(x) and denominator D(x) of (4) are constant coefficients, and so the problem boils down to a binary fractional program in the variables x_{S_i}. Integer fractional programming is a powerful modeling tool which has been successfully applied to a wide range of applications, including landscape ecology and forest fragmentation (Billionnet, 2010), biodiversity conservation (Billionnet, 2013), inventory routing (Archetti, Desaulniers, & Speranza, 2017), portfolio optimization and wind-farm optimization (Sethuraman & Butenko, 2015), or network optimization (Klau, Ljubić, Mutzel, Pferschy, & Weiskircher, 2003).

Following Dinkelbach (1967), Megiddo (1978) proposed a method to solve fractional programs with 0–1 variables x ∈ X, where X is a set of linear constraints. The idea is to iteratively solve a so-called parameterized problem F_par(λ) = N(x) − λD(x) until N(x) − λD(x) = 0 (in practice, |N(x) − λD(x)| ≤ ε, where ε is a very small positive real number). The final value of λ that sets N(x) − λD(x) to zero is the optimal ratio (λ*), and the vector of variables x* corresponding to λ* is the vector optimizing the ratio, i.e.,

$$\max_{x \in X} \frac{N(x)}{D(x)} = \frac{N(x^*)}{D(x^*)} = \lambda^*.$$

For the AOPNL, the corresponding iterative parameterized problem F_par(λ) is:

$$F_{par}(\lambda) = \max_{\substack{(S_1, \ldots, S_m)\\ S_i \in C_i,\ i \in M}} \ \sum_{i \in M} \Bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\Bigr)^{\gamma_i - 1} \sum_{k \in S_i} r_{ik} v_{ik} \ - \ \lambda \Bigl( V_0 + \sum_{i \in M} \Bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\Bigr)^{\gamma_i} \Bigr). \tag{5}$$

The following theorem follows from the findings in Megiddo (1978).

Theorem 1. Let Z* be the optimal value of problem (3). If one can find a λ* for which F_par(λ*) = 0, then the collection of assortments (S_1(λ*), ..., S_m(λ*)) defined as the solution of problem (5) solved with λ = λ* is the optimal solution to problem (4) (equivalent to problem (3)), and Z* = λ*.

We now show that the above parameterized problem is decomposable by nest.

2.2. Decomposability of the parameterized subproblem

Proposition 1. The parameterized problem (5) is separable by nest and can be written as:

$$F_{par}(\lambda) = -\lambda V_0 + \sum_{i \in M} \max \Bigl\{ \Bigl(v_{i0} + \sum_{k \in N} v_{ik} x_{ik}\Bigr)^{\gamma_i - 1} \Bigl( \sum_{k \in N} v_{ik} (r_{ik} - \lambda)\, x_{ik} - \lambda v_{i0} \Bigr) \Bigr\} \quad \text{s.t.:} \ x_{ik} \in \{0, 1\}, \ k \in N, \ i \in M, \tag{6}$$

where x_ik = 1 if and only if product k ∈ N is offered in nest i ∈ M.

Proof. Since in problem (5) we have simple assignment constraints of choosing one assortment per nest, we observe that the objective of the parameterized program can be rewritten as:

$$-\lambda V_0 + \sum_{i \in M} \max_{S_i \in C_i} \Bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\Bigr)^{\gamma_i - 1} \Bigl( \sum_{k \in S_i} v_{ik} (r_{ik} - \lambda) - \lambda v_{i0} \Bigr).$$

Hence, the program is decomposable by nest, and the maximization subproblem for nest i is given as:

$$\max_{S_i \in C_i} \Bigl(v_{i0} + \sum_{k \in S_i} v_{ik}\Bigr)^{\gamma_i - 1} \Bigl( \sum_{k \in S_i} v_{ik} (r_{ik} - \lambda) - \lambda v_{i0} \Bigr). \tag{7}$$

We can see the above problem as a binary non-linear program with decision variables x_ik indicating whether product k ∈ N is offered in nest i ∈ M, and so the whole problem can be reformulated as in (6). □

This result enables us to speed up the resolution of the subproblem by separately solving a smaller problem for each nest i ∈ M. To find the optimal value of λ, we use the framework proposed by Megiddo (1978), which is illustrated in Algorithm 1. The algorithm is a dichotomic search on λ, and it is based on the ability to solve the parametric problem in an exact way. It is also based on the fact that the function F_par(λ) in (6) is piece-wise linear in λ. We refer the reader to Dinkelbach (1967) and Megiddo (1978) for further details on the algorithm and its proven convergence to optimality.

2.3. Algorithmic details

To initialize our parametric search, we first need to determine a lower bound λ_l and an upper bound λ_u for λ*. The value of λ_l corresponds to a primal bound of the AOPNL that can be obtained
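To make the parametric scheme concrete, the sketch below runs a dichotomic search on λ for a hypothetical toy instance, solving each single-nest subproblem (7) by plain enumeration. The enumeration merely stands in for the tailored Branch-and-Bound of Section 3, which is only needed at realistic sizes; all numerical data are invented.

```python
import itertools
import math

# Hypothetical toy instance (all data made up): m = 2 nests, n = 3 products.
r = [[8.0, 5.0, 3.0], [6.0, 4.0, 2.0]]   # revenues r_ik
v = [[1.0, 2.0, 1.5], [1.5, 0.5, 1.0]]   # preferences v_ik
v0 = [0.5, 0.2]                           # no-purchase weight of each nest
V0 = 1.0                                  # weight of leaving the system
gamma = [0.7, 1.3]                        # dissimilarity parameters
m, n = len(r), len(r[0])

def all_subsets(n):
    return itertools.chain.from_iterable(
        itertools.combinations(range(n), s) for s in range(n + 1))

def best_nest_value(i, lam):
    """Subproblem (7): max over S of (v_i0 + sum v)^(gamma_i - 1)
       * (sum_k v_ik (r_ik - lam) - lam * v_i0), here by enumeration."""
    best = -math.inf
    for S in all_subsets(n):
        w = v0[i] + sum(v[i][k] for k in S)
        val = w ** (gamma[i] - 1) * (
            sum(v[i][k] * (r[i][k] - lam) for k in S) - lam * v0[i])
        best = max(best, val)
    return best

def F_par(lam):
    """Parameterized objective (6): one independent subproblem per nest."""
    return -lam * V0 + sum(best_nest_value(i, lam) for i in range(m))

# Dichotomic search on lambda: F_par is decreasing in lambda and its root
# equals the optimal expected revenue Z* (Theorem 1).
lo, hi = 0.0, max(max(row) for row in r)
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F_par(mid) > 0 else (lo, mid)
lam_star = 0.5 * (lo + hi)
```

By Theorem 1, the λ* returned by the search coincides with the optimal expected revenue Z* of the toy instance, which can be verified by brute force over all assortment combinations.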
Define the constant

$$C_1 = v_{n+1}(r_{n+1} - \lambda) - \lambda v_0.$$

After substituting v_{n+1} and r_{n+1} by their expression above in C_1, we obtain

$$C_1 = \frac{\gamma + 1}{\gamma - 1}\, B.$$

Based on Proposition 7, we know that product n + 1 is guaranteed to be offered in any optimal assortment, since r_{n+1} > λ and, from the formulation above, C_1 > 0. Therefore, x*_{n+1} = 1, and after denoting V_1 = v_0 + v_{n+1} = B, the subproblem reduces to the following problem (Q):

$$(Q)\quad \max_{x \in \{0,1\}^n} f(x) = \Bigl(V_1 + \sum_{k=1}^{n} v_k x_k\Bigr)^{\gamma - 1} \times \Bigl(C_1 + \sum_{k=1}^{n} v_k (r_k - \lambda)\, x_k\Bigr). \tag{12}$$

We show that we can find a solution of (Q) with value at least (2B)^γ/(γ − 1) if and only if the answer to the subset-sum decision problem is yes, i.e., there exists a subset S ⊂ {1, ..., n} such that Σ_{k∈S} a_k = B. Observe that with the above setting, our (NLAPP) instance can be rewritten as:

$$\max_{x \in \{0,1\}^n} f(x) = \Bigl(V_1 + \sum_{k=1}^{n} a_k x_k\Bigr)^{\gamma - 1} \times \Bigl(C_1 - \sum_{k=1}^{n} a_k x_k\Bigr) = (V_1 + g(x))^{\gamma - 1} \times (C_1 - g(x)), \quad \text{with } g(x) = \sum_{k=1}^{n} a_k x_k.$$

Now define the function h(X) = (V_1 + X)^{γ−1} × (C_1 − X) over [0, C_1], such that h(g(x)) = f(x). Observe that without loss of generality we can focus on the interval [0, C_1], since for X > C_1 we have h(X) < 0. Function h is differentiable, and we have

$$h'(X) = (C_1 - X)(\gamma - 1)(V_1 + X)^{\gamma - 2} - (V_1 + X)^{\gamma - 1} = (V_1 + X)^{\gamma - 1} \Bigl( \frac{(C_1 - X)(\gamma - 1)}{V_1 + X} - 1 \Bigr) = 0 \ \text{ if and only if } \ X = \frac{\gamma - 1}{\gamma}\, C_1 - \frac{V_1}{\gamma} = B.$$

Since h′(X) ≥ 0 if and only if X ≤ B, function h is increasing then decreasing over [0, C_1] (observe h(C_1) = 0) and reaches its maximum for X = B, with

$$h(B) = (2B)^{\gamma - 1} \times \Bigl( \frac{\gamma + 1}{\gamma - 1}\, B - B \Bigr) = 2^{\gamma - 1} B^{\gamma}\, \frac{2}{\gamma - 1} = \frac{(2B)^{\gamma}}{\gamma - 1} > 0.$$

We conclude that x ∈ {0,1}^n reaches value (2B)^γ/(γ − 1) for the objective function of (Q) if and only if g(x) = Σ_{k=1}^{n} a_k x_k = B, which is a solution for the Subset-Sum decision problem. Since the latter is NP-complete and the reduction described above is a polynomial reduction, this completes the NP-completeness proof for the decision version of (NLAPP). □

We now describe the Branch-and-Bound (B&B) algorithm. We classify our analysis based on the value of the dissimilarity parameter of a nest; consequently, the properties of the subproblem for a nest i change according to the value of γ_i.

3.2. A tailored Branch-and-Bound algorithm

We develop a tailored B&B algorithm to determine the optimal assortment for each nest. We have a set of variables that have been fixed to 1 throughout the B&B tree until node t. We denote this set by K_1^{t−1}. We also have a set of variables that have been fixed to 0. We use K_0^{t−1} to denote this set. Finally, we have a set of undecided variables denoted by K̄^{t−1}. We define

$$C_1^{t-1} = \sum_{j \in K_1^{t-1}} v_j (r_j - \lambda) - \lambda v_0,$$

as the sum of the weighted revenues of the products offered thus far in a particular nest. We also define

$$V_1^{t-1} = \sum_{j \in K_1^{t-1}} v_j + v_0,$$

as the sum of preferences for the respective products. Using these notations, at each node t in the B&B tree, the objective function to be maximized is:

$$\max_{x \in \{0,1\}^{|\bar K^{t-1}|}} f^t(x) = \Bigl(V_1^{t-1} + \sum_{k \in \bar K^{t-1}} v_k x_k\Bigr)^{\gamma - 1} \times \Bigl(C_1^{t-1} + \sum_{k \in \bar K^{t-1}} v_k (r_k - \lambda)\, x_k\Bigr). \tag{13}$$

We naturally branch on a binary variable x_k, k ∈ K̄^{t−1}, fixing x_k = 1 and x_k = 0 in the two child nodes of a node t of the tree. In the branching ordering, we choose to branch on a variable with the highest revenue, as fixing such variables to one has a strong potential impact on the value of the objective function. We enumerate the branching tree in a depth-first-search fashion, following branches that fix variables to one. The main issue is how to estimate an upper bound on the objective value of a node, say t, that is both i) tight enough to enable efficient pruning, and ii) tractable enough to be quickly computed, despite the non-linearity of the objective. Observe that the structure of the subproblem varies with the value of the nest dissimilarity parameter γ, considering two cases: γ ≤ 1 and γ > 1.

It turns out that the subproblem has interesting characteristics that allow us to perform a preprocessing procedure and eliminate some of the variables without branching, which in turn speeds up the computation. We update the sets K̄^{t−1}, K_0^{t−1} and K_1^{t−1} and the constants C_1^{t−1} and V_1^{t−1} by removing or adding variables that were fixed either by branching or by the preprocessing. In the next sections, we provide detailed descriptions of this preprocessing procedure and of the upper bound calculations, for each case γ ≤ 1 and γ > 1.

3.3. Competitive products (γ ≤ 1)

We identify three possible scenarios at the root node. For each scenario, we derive some measures to perform a preprocessing and potentially fix a relatively large number of variables before entering the B&B tree. We first observe that if γ ≤ 1, we can reformulate the objective function (11) at the root node 0 as:

$$\max_{x \in \{0,1\}^{|N|}} f^0(x) = \frac{\sum_{k \in N} v_k (r_k - \lambda)\, x_k - \lambda v_0}{\bigl(v_0 + \sum_{k \in N} v_k x_k\bigr)^{1 - \gamma}}, \tag{14}$$

and we proceed to the following proposition on preprocessing at the root node. We provide pseudo-codes of all preprocessing procedures in Appendices B.1 through B.4.

Proposition 4. At the root node (t = 0), set H = Σ_{k∈N: r_k ≥ λ} v_k (r_k − λ) − λv_0, and let x* denote an optimal solution of the subproblem. We have three possible scenarios based on the value of H:

i) if H = 0 (Case 0), the problem is solved by preprocessing without branching; defining K_1^0 = {k ∈ N : r_k ≥ λ} and K_0^0 = {k ∈ N :
r_k < λ}, the optimal solution of the subproblem is x*_k = 1 for k ∈ K_1^0 and x*_k = 0 for k ∈ K_0^0.

ii) if H > 0 (Case 1), let K_0^0 = {k ∈ N : r_k < λ}; then x*_k = 0 for k ∈ K_0^0. Moreover, define K = N \ K_0^0 and r_max = max_{j∈N} {r_j}, and let

$$K_1^0 = \Bigl\{ k \in N \setminus K_0^0 : r_k \ge \lambda + (1 - \gamma)\Bigl(r_{\max} - \lambda - \frac{v_0\, r_{\max}}{v_0 + \sum_{j \in K} v_j}\Bigr) \Bigr\};$$

then x*_k = 1 for k ∈ K_1^0.

iii) if H < 0 (Case 2), let K_1^0 = {k ∈ N : r_k ≥ λ}; then x*_k = 1 for k ∈ K_1^0. Moreover, we define K = N \ K_1^0, r_min = min_{j∈K} {r_j}, and Δ = C_1^0 − V_1^0 (r_min − λ); then we have two cases:

• If Δ ≥ 0, x*_k = 0 for k ∈ K_0^0 = { k ∈ K : r_k ≤ λ + (1 − γ) (C_1^0 + (r_min − λ) Σ_{j∈K} v_j) / (V_1^0 + Σ_{j∈K} v_j) }.
• If Δ < 0, then x*_k = 0 for k ∈ K_0^0 = { k ∈ K : r_k ≤ λ + (1 − γ) C_1^0 / V_1^0 }.

Then x*_k = 1 for k ∈ K_1^t.

ii) if H < 0 (Case 2), we define r_min = min_{j∈K̄^{t−1}} {r_j} and Δ = C_1^{t−1} − V_1^{t−1}(r_min − λ); then we have two cases:

• If Δ ≥ 0, x*_k = 0 for k ∈ K_0^t = { k ∈ K̄^{t−1} : r_k ≤ λ + (1 − γ) (C_1^{t−1} + (r_min − λ) Σ_{j∈K̄^{t−1}} v_j) / (V_1^{t−1} + Σ_{j∈K̄^{t−1}} v_j) }.
• If Δ < 0, then x*_k = 0 for k ∈ K_0^t = { k ∈ K̄^{t−1} : r_k ≤ λ + (1 − γ) C_1^{t−1} / V_1^{t−1} }.

After fixing variables using the above revenue thresholds, we restrict the set of undecided variables to K̄^t, and at each node t after preprocessing, the subproblem reduces to:

$$\max_{x \in \{0,1\}^{|\bar K^t|}} f^t(x) = \Bigl(V_1^t + \sum_{k \in \bar K^t} v_k x_k\Bigr)^{\gamma - 1} \times \Bigl(C_1^t + \sum_{k \in \bar K^t} v_k (r_k - \lambda)\, x_k\Bigr). \tag{15}$$

We now proceed to the development of an efficient upper bound for function (15) at each node t. These bounds are tight enough to enable efficient pruning, and tractable enough to be quickly computed despite the high non-linearity of the objective (15). For the case where C_1^t > 0, we derive a bound in the following proposition (proof in Appendix A.4).

Proposition 6. [Upper bound of f^t(x) in (15)] If C_1^t > 0, then

$$f^t(x) \le (V_1^t)^{\gamma - 1}\, C_1^t\, e^{z_1^*(t)}, \tag{16}$$

where

$$z_1^*(t) = \sum_{k \in \bar K^t} v_k \max\Bigl(0,\ \frac{2(\gamma - 1)}{2V_1^t + \sum_{k \in \bar K^t} v_k} + \frac{r_k - \lambda}{C_1^t}\Bigr).$$

Preliminary numerical experiments showed that using this bound for pruning branches improves the computation time of the Branch-and-Bound algorithm (by roughly 30%).

We note the objective function after preprocessing at node t:

$$\max_{x \in \{0,1\}^{|\bar K^t|}} f^t(x) = \Bigl(V_1^t + \sum_{k \in \bar K^t} v_k x_k\Bigr)^{\gamma - 1} \times \Bigl(C_1^t + \sum_{k \in \bar K^t} v_k (r_k - \lambda)\, x_k\Bigr). \tag{17}$$

We now design an upper bound for pruning nodes in the case where γ > 1 and C_1^t > 0.

Proposition 9. [Upper bound of f^t(x) in (17)] If C_1^t > 0, then

$$f^t(x) \le (V_1^t)^{\gamma - 1}\, C_1^t\, e^{z_2^*(t)},$$

where

$$z_2^*(t) = \sum_{k \in \bar K^t} v_k \max\Bigl(0,\ \frac{\gamma - 1}{V_1^t} + \frac{r_k - \lambda}{C_1^t}\Bigr).$$

See Appendix A.7 for the proof. The above upper bound can be computed in linear time O(|K̄^t|), if the revenues r_k are already sorted in a pre-processing phase.
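Since the bound of Proposition 9 needs only one pass over the undecided products, it is easy to check numerically. The sketch below evaluates the bound (V_1^t)^{γ−1} C_1^t e^{z_2^*(t)} on a hypothetical node with γ > 1 and C_1^t > 0 (all numbers are invented) and compares it with the true node maximum obtained by enumeration.

```python
import itertools
import math

# Hypothetical node data (invented): gamma > 1 and C1 > 0, as required.
gamma, lam = 1.5, 3.0
V1, C1 = 1.2, 0.8            # node constants V_1^t and C_1^t
v = [0.9, 0.4, 1.1]          # preferences of the undecided products in Kbar^t
rev = [7.0, 2.5, 4.0]        # revenues of the undecided products

# z_2^*(t) = sum_k v_k * max(0, (gamma - 1)/V1 + (r_k - lam)/C1)
z2 = sum(vk * max(0.0, (gamma - 1) / V1 + (rk - lam) / C1)
         for vk, rk in zip(v, rev))
upper = V1 ** (gamma - 1) * C1 * math.exp(z2)

# True node maximum of f^t over the undecided products, by enumeration:
f_max = max(
    (V1 + sum(v[k] for k in S)) ** (gamma - 1)
    * (C1 + sum(v[k] * (rev[k] - lam) for k in S))
    for s in range(len(v) + 1)
    for S in itertools.combinations(range(len(v)), s))
```

The bound follows from log(V_1 + s) ≤ log V_1 + s/V_1 and log(C_1 + u) ≤ log C_1 + u/C_1; it is loose on its own, but it costs only one linear pass, which is what pruning needs.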
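Putting the pieces of Section 3.2 together, the following sketch is a compact depth-first Branch-and-Bound for one nest. It branches on the highest-revenue undecided product first, as described above, but for brevity it uses a simple interval bound in place of Propositions 6 and 9, so it is a stand-in for, not a reproduction of, the paper's algorithm; the instance data are invented.

```python
import math

def solve_nest(v0, gamma, v, rev, lam):
    """Depth-first B&B for subproblem (7): maximize
    (v0 + sum_{k in S} v_k)^(gamma - 1)
      * (sum_{k in S} v_k * (r_k - lam) - lam * v0)
    over subsets S. Requires v0 > 0 so the power is always defined."""
    order = sorted(range(len(v)), key=lambda k: -rev[k])  # highest revenue first
    best = [-math.inf, frozenset()]

    def dfs(idx, V1, C1, chosen):
        val = V1 ** (gamma - 1) * C1          # value of the current subset
        if val > best[0]:
            best[0], best[1] = val, frozenset(chosen)
        if idx == len(order):
            return
        rest = order[idx:]
        # Interval bound: for any completion, B(x) <= Bmax and
        # V1 <= A(x) <= Amax, and A^(gamma-1) * Bmax is monotone in A,
        # so the maximum over [V1, Amax] is attained at an endpoint.
        Bmax = C1 + sum(max(0.0, v[k] * (rev[k] - lam)) for k in rest)
        Amax = V1 + sum(v[k] for k in rest)
        ub = max(V1 ** (gamma - 1) * Bmax, Amax ** (gamma - 1) * Bmax)
        if ub <= best[0]:
            return                            # prune this node
        k = order[idx]
        dfs(idx + 1, V1 + v[k], C1 + v[k] * (rev[k] - lam), chosen | {k})
        dfs(idx + 1, V1, C1, chosen)

    dfs(0, v0, -lam * v0, set())
    return best[0], best[1]

obj, S = solve_nest(0.4, 1.4, [1.0, 0.7, 1.3, 0.5], [9.0, 6.5, 4.0, 2.0], 3.0)
```

The one-branch-first depth-first order mirrors the strategy of Section 3.2; swapping the interval bound for the exponential bounds above tightens the pruning without changing the search skeleton.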
Table 2
Results on instances with n = 10.
Table 3
Results on instances with n = 25.
Panel v_i0 = 0, ∀i ∈ M — columns: [γ^L, γ^U]; Heuristic: Time, Obj.; FP+BB: Time, Obj., # Opt., # Imp., Impr.(%), # Iter.
Panel v_i0 > 0, ∀i ∈ M — same columns.
Table 4
Results on instances with n = 50.
Panel v_i0 = 0, ∀i ∈ M — columns: [γ^L, γ^U]; Heuristic: Time, Obj.; FP+BB: Time, Obj., # Opt., # Imp., Impr.(%), # Iter.
Panel v_i0 > 0, ∀i ∈ M — same columns.
Table 5
Results on instances with n = 100.
Panel v_i0 = 0, ∀i ∈ M — columns: [γ^L, γ^U]; Heuristic: Time, Obj.; FP+BB: Time, Obj., # Opt., # Imp., Impr.(%), # Iter.
Panel v_i0 > 0, ∀i ∈ M — same columns.
Table 6
Results on instances with n = 250.
Panel v_i0 = 0, ∀i ∈ M — columns: [γ^L, γ^U]; Heuristic: Time, Obj.; FP+BB: Time, Obj., # Opt., Gap(%), # Imp., Impr.(%), # Iter.
Panel v_i0 > 0, ∀i ∈ M — same columns.
Table 7
Results on super large instances with vi0 = 0, ∀i ∈ M.
Panel n = 500 — columns: [γ^L, γ^U]; Heuristic: Time, Obj.; FP+BB: Time, Obj., # Opt., # Imp., Impr.(%), # Iter.
Panel n = 1,000 — same columns.
Panel n = 5,000 — same columns.
parameters $v_{i0}$ and $\gamma_i$. We also consider various intervals for the generation of the dissimilarity parameter, which we denote by $[\gamma^L, \gamma^U]$. To better observe the impact of $v_{i0}$ on the performance of FP+BB, we consider separate groups of instances. In all tables, we solve instances of the AOPNL both by offering the best sorted-by-revenue assortments in each nest, obtained by solving the linear problem expressed through formulation (8)–(10), and by using FP+BB. We report the CPU time in seconds of each method (Time) along with the best expected revenue (Obj.) found by each method. For each combination of parameters (each row), which we call a "scenario" in what follows, we solve 20 instances. For each scenario, the average percentage improvement in the expected revenue obtained by FP+BB is computed as

$$\mathrm{Impr.}(\%) = 100 \times \frac{1}{20} \sum_{p=1}^{20} \frac{\mathrm{BEST}_p - \mathrm{HEUR}_p}{\mathrm{HEUR}_p},$$

where $p = 1, \ldots, 20$ indexes the instances of the considered scenario, $\mathrm{HEUR}_p$ is the expected revenue obtained by offering sorted-by-revenue assortments for instance $p$ and, finally, $\mathrm{BEST}_p$ is the optimal expected revenue found by FP+BB (or the value of $S_{best}$, if it reaches the time limit). This gap can also be interpreted as the average improvement in revenue obtained by FP+BB. We use Impr.(%) to report this value in the tables.

For the FP+BB method, we also report for each scenario the number of instances (out of 20) for which the optimal solution was found (# Opt.), the overall number of instances (out of 20) for which FP+BB improved the expected revenue found by Heuristic (# Imp.), and the average number of iterations of the FP+BB approach (# Iter.). Finally, for the FP+F.Enum. method we report the average CPU time in seconds (Time).

In Table 2, we report the results for instances with $n = 10$. As these instances are small, we are able to determine the optimal solution of the subproblem (11), and consequently of the original AOPNL (3), by generating all $2^n$ possible assortments for each nest. As a result, in Table 2 we report the CPU time of the FP+F.Enum. method, which generates all possible assortments when solving the subproblem instead of using the specialized branch-and-bound.

The results in this table show that our FP+BB method performs well in terms of finding the optimal solution, with a processing time almost equal to that of the heuristic of Davis et al. (2014). We can also observe that the heuristic exhibits significant gaps […]
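The averaging above amounts to a one-line computation; the sketch below illustrates it on made-up per-instance revenues (the values are hypothetical, not results from the tables):

```python
# Hypothetical revenues for the 20 instances of one scenario:
# HEUR[p] from the sorted-by-revenue heuristic, BEST[p] from FP+BB.
HEUR = [100.0, 80.0, 125.0, 90.0] * 5
BEST = [110.0, 80.0, 130.0, 99.0] * 5

# Average percentage improvement Impr.(%), as defined above.
impr = 100.0 * sum((b - h) / h for b, h in zip(BEST, HEUR)) / len(HEUR)
print(round(impr, 2))  # average of per-instance relative gains, in percent
```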
[…] large instances and super large instances. In both plots, the horizontal axis shows the interval of the dissimilarity parameter of the nests. We observe that, as the dissimilarity parameter increases, the overall improvement of FP+BB over the value found by Heuristic consistently increases, regardless of the value of $v_{i0}$ or the number of products within each nest. Finding the optimal assortment using our proposed FP+BB can improve the expected revenue on average by up to 63% (obtained for $[\gamma^L, \gamma^U] = [1.5, 2.5]$ and $v_{i0} = 0, \forall i \in M$ in Table 2) compared to offering sorted-by-revenue assortments (Heuristic).

5. Conclusion

In this paper, we studied the assortment problem when customer choice behavior is modeled using a nested logit model. We proposed an exact method combining fractional programming and a tailored branch-and-bound algorithm for the non-linear parameterized subproblem associated with each nest, which we proved to be NP-hard by a non-trivial reduction from the Subset-Sum problem. Our method enables one to find an optimal assortment regardless of the value of the dissimilarity parameter or the attractiveness of the no-purchase option within a nest, i.e., any mix of the values of these parameters can be handled by the algorithm. Computational results reveal that finding the optimal assortment can have a significant impact on the expected revenue compared to the benchmark heuristic in the literature (Davis et al., 2014), which is based on sorted-by-revenue assortments (this heuristic being optimal in one particular case). The highest revenue improvement (63% on average) of the optimal assortment over the heuristic solution was obtained for problems with 10 products per nest and dissimilarity parameters between 1.5 and 2.5. As the number of products increases, the improvement becomes modest (at most 3% for our large instances). However, for some instances without the no-purchase option, our exact method could solve problems with up to 5000 products to optimality in less than one minute and a half, and it was generally fast for all scenarios tested. Our method also enables us to assess the empirical performance of the sorted-by-revenue heuristic, which can be a good compromise for a huge number of products and dissimilarity parameters higher than, but close to, one.

An exciting direction for future research could be to extend our method to problems with capacity or cardinality constraints. An AOPNL with both capacity and cardinality constraints for each particular nest, in which all dissimilarity parameters are at most one, has been studied by Gallego and Topaloglu (2014) and Désir et al. (2014). Developing an efficient exact method for the more general case in which some dissimilarity parameters are higher than one remains open.

Appendix A. Proof of Lemmas and Propositions

A.1. Proof of Proposition 2

Since the parameterized problem has a piece-wise linear structure […]

A.2. Proof of Proposition 4

i) If $H = 0$, then by adding all the products whose revenue is greater than or equal to $\lambda$, the second part on the right-hand side of the objective function (11) becomes zero and, therefore, the value of the function (11) is zero. This means that the lower bound is zero, and removing any of these products would result in a negative value of the objective function; therefore $x_k^* = 1$ for $k \in K_1^0$, where $K_1^0 = \{k \in K : r_k \geq \lambda\}$. On the other hand, since the lower bound is zero, adding a product with $r_k < \lambda$ turns the second part of the right-hand side of the objective function (11) negative, which in turn results in a negative value. Therefore, $x_k^* = 0$ for $k \in K_0^0 = \{k \in K \setminus K_1^0 : r_k < \lambda\}$.

ii) If $H > 0$, then there exists a subset of products with $r_k \geq \lambda$ such that, if offered in the assortment of nest $i$, the value of the objective function (14) is positive. As a result, offering any product from $K_0^0 = \{k \in K : r_k < \lambda\}$ reduces the numerator of formulation (14) and at the same time increases its denominator, which in turn decreases the value of the fraction. Therefore, $x_k^* = 0$ for $k \in K_0^0$. Now that we are left only with products of revenue $r_k > \lambda$, we define $K = N \setminus K_0^0$ and $r_{\max} = \max_{j \in N} \{r_j\}$, and calculate the partial derivative of function (11). We have:

$$\frac{\partial f^0(x)}{\partial x_k} = (\gamma-1)\Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-2} v_k \Big(\sum_{j\in K} v_j (r_j-\lambda) x_j - \lambda v_0\Big) + \Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} v_k (r_k-\lambda)$$

$$= v_k \Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} \left( (\gamma-1)\,\frac{\sum_{j\in K} v_j (r_j-\lambda) x_j - \lambda v_0}{v_0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right)$$

$$= v_k \Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} \left( (\gamma-1)\,\frac{\sum_{j\in K} v_j (r_j-\lambda) x_j - \lambda v_0 + v_0(r_{\max}-\lambda) - v_0(r_{\max}-\lambda)}{v_0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right)$$

$$\geq v_k \Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} \left( (\gamma-1)\,\frac{(r_{\max}-\lambda)\sum_{j\in K} v_j x_j + v_0(r_{\max}-\lambda) - \lambda v_0 - v_0(r_{\max}-\lambda)}{v_0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right) \quad (\text{since } \gamma-1<0)$$

$$= v_k \Big(v_0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} \left( (\gamma-1)\left( (r_{\max}-\lambda) - \frac{v_0\, r_{\max}}{v_0 + \sum_{j\in K} v_j x_j} \right) + (r_k-\lambda) \right)$$
Since $r_{\max}-\lambda > 0$,

$$\frac{\partial f^0(x)}{\partial x_k} \geq 0 \quad \text{if} \quad r_k \geq \lambda + (1-\gamma)\left( r_{\max} - \lambda - \frac{v_0\, r_{\max}}{v_0 + \sum_{j\in K} v_j} \right).$$

The function is therefore increasing in $x_k$ for $k \in K_1^0 = \{k \in N \setminus K_0^0 : r_k \geq \lambda + (1-\gamma)( r_{\max} - \lambda - (v_0\, r_{\max})/(v_0 + \sum_{j\in K} v_j) )\}$, and for these products $k$ we have $x_k^* = 1$.

iii) If $H < 0$, there exists no positive value for the objective function; in other words, for any vector $x$, $f^0(x) < 0$. In this case, since the value of the fraction on the right-hand side of (14) is always negative, adding products with $r_k \geq \lambda$ always makes the numerator less negative and at the same time increases the value of the denominator, which in turn results in an overall less negative value of the objective function (14). Therefore, define $K_1^0 = \{k \in N : r_k \geq \lambda\}$; then $x_k^* = 1$ for all $k \in K_1^0$.

Defining $C_1^0 = \sum_{k\in K_1^0} v_k(r_k-\lambda) - \lambda v_0$, $V_1^0 = \sum_{k\in K_1^0} v_k + v_0$ and $K = N \setminus K_1^0$, we can update the objective function (11) over the remaining variables as:

$$\max_{x\in\{0,1\}^{|K|}} f^0(x) = \Big(V_1^0 + \sum_{k\in K} v_k x_k\Big)^{\gamma-1} \times \Big(C_1^0 + \sum_{k\in K} v_k (r_k-\lambda) x_k\Big).$$

We define $r_{\min} = \min_{j\in K}\{r_j\}$ and $\Delta = C_1^0 - V_1^0 (r_{\min}-\lambda)$. Calculating the partial derivative of the function above with respect to variable $x_k$, we have:

$$\frac{\partial f^0(x)}{\partial x_k} = (\gamma-1)\Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-2} v_k \Big(C_1^0 + \sum_{j\in K} v_j(r_j-\lambda)x_j\Big) + \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1} v_k (r_k-\lambda)$$

$$= v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^0 + \sum_{j\in K} v_j(r_j-\lambda)x_j}{V_1^0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right)$$

$$\leq v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^0 + (r_{\min}-\lambda)\sum_{j\in K} v_j x_j}{V_1^0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right) \quad (\text{since } \gamma-1<0)$$

$$= v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{(r_{\min}-\lambda)\sum_{j\in K} v_j x_j + V_1^0(r_{\min}-\lambda) + C_1^0 - V_1^0(r_{\min}-\lambda)}{V_1^0 + \sum_{j\in K} v_j x_j} + (r_k-\lambda) \right)$$

$$= v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( (r_{\min}-\lambda) + \frac{C_1^0 - V_1^0(r_{\min}-\lambda)}{V_1^0 + \sum_{j\in K} v_j x_j} \right) + (r_k-\lambda) \right) \qquad (\ast)$$

We now have two cases. If $\Delta \geq 0$, we can transform the expression denoted by $(\ast)$ as:

$$(\ast) \leq v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( (r_{\min}-\lambda) + \frac{C_1^0 - V_1^0(r_{\min}-\lambda)}{V_1^0 + \sum_{j\in K} v_j} \right) + (r_k-\lambda) \right)$$

since $\gamma-1<0$; hence

$$\frac{\partial f^0(x)}{\partial x_k} \leq 0 \quad \text{if} \quad r_k \leq \lambda + (1-\gamma)\,\frac{C_1^0 + (r_{\min}-\lambda)\sum_{j\in K} v_j}{V_1^0 + \sum_{j\in K} v_j}.$$

On the other hand, if $\Delta < 0$, we transform $(\ast)$ as follows:

$$(\ast) \leq v_k \Big(V_1^0 + \sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( (r_{\min}-\lambda) + \frac{C_1^0 - V_1^0(r_{\min}-\lambda)}{V_1^0} \right) + (r_k-\lambda) \right)$$

since $\gamma-1<0$; hence

$$\frac{\partial f^0(x)}{\partial x_k} \leq 0 \quad \text{if} \quad r_k \leq \lambda + (1-\gamma)\,\frac{C_1^0}{V_1^0}.$$

Therefore, if $\Delta \geq 0$, $x_k^* = 0$ for $k \in K_0^0 = \{k \in K : r_k \leq \lambda + (1-\gamma)(C_1^0 + (r_{\min}-\lambda)\sum_{j\in K} v_j)/(V_1^0 + \sum_{j\in K} v_j)\}$, and if $\Delta < 0$, then $x_k^* = 0$ for $k \in K_0^0 = \{k \in K : r_k \leq \lambda + (1-\gamma)\, C_1^0/V_1^0\}$.

A.3. Proof of Proposition 5

i) We set $r_{\max} = \max_{j\in \bar K^{t-1}}\{r_j\}$. Calculating the partial derivative of the function (13), we have:

$$\frac{\partial f^t(x)}{\partial x_k} = (\gamma-1)\Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-2} v_k \Big(C_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j(r_j-\lambda)x_j\Big) + \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1} v_k (r_k-\lambda)$$

$$= v_k \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j(r_j-\lambda)x_j}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} + (r_k-\lambda) \right)$$

$$= v_k \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( \frac{C_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j(r_j-\lambda)x_j + V_1^{t-1}(r_{\max}-\lambda)}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} - \frac{V_1^{t-1}(r_{\max}-\lambda)}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} \right) + (r_k-\lambda) \right)$$

$$\geq v_k \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( \frac{C_1^{t-1} + (r_{\max}-\lambda)\sum_{j\in \bar K^{t-1}} v_j x_j + V_1^{t-1}(r_{\max}-\lambda)}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} - \frac{V_1^{t-1}(r_{\max}-\lambda)}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} \right) + (r_k-\lambda) \right) \quad (\text{since } \gamma-1<0)$$
$$= v_k \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\left( (r_{\max}-\lambda) + \frac{C_1^{t-1} - V_1^{t-1}(r_{\max}-\lambda)}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j} \right) + (r_k-\lambda) \right)$$

$$\geq v_k \Big(V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^{t-1} + (r_{\max}-\lambda)\sum_{j\in \bar K^{t-1}} v_j}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j} + (r_k-\lambda) \right),$$

since $C_1^{t-1} - V_1^{t-1}(r_{\max}-\lambda) < 0$ and $\gamma-1<0$. Therefore, for

$$k \in K_1^t = \left\{k \in \bar K^{t-1} : r_k \geq \lambda + (1-\gamma)\,\frac{C_1^{t-1} + (r_{\max}-\lambda)\sum_{j\in \bar K^{t-1}} v_j}{V_1^{t-1} + \sum_{j\in \bar K^{t-1}} v_j}\right\},$$

we have $x_k^* = 1$.

ii) The proof of this part is similar to that of part iii) in the proof of Proposition 4.

A.4. Proof of Proposition 6

In this case, we can rewrite the objective function (15) as:

$$f^t(x) = (V_1^t)^{\gamma-1} C_1^t \left(1 + \frac{\sum_{k\in\bar K^t} v_k x_k}{V_1^t}\right)^{\gamma-1} \times \left(1 + \frac{\sum_{k\in\bar K^t} v_k (r_k-\lambda) x_k}{C_1^t}\right)$$

$$= (V_1^t)^{\gamma-1} C_1^t \exp\left( \ln\left( \left(1 + \frac{\sum_{k\in\bar K^t} v_k x_k}{V_1^t}\right)^{\gamma-1} \left(1 + \frac{\sum_{k\in\bar K^t} v_k (r_k-\lambda) x_k}{C_1^t}\right) \right) \right)$$

$$= (V_1^t)^{\gamma-1} C_1^t \exp\left( (\gamma-1)\ln\left(1 + \frac{\sum_{k\in\bar K^t} v_k x_k}{V_1^t}\right) + \ln\left(1 + \frac{\sum_{k\in\bar K^t} v_k (r_k-\lambda) x_k}{C_1^t}\right) \right)$$

$$\leq (V_1^t)^{\gamma-1} C_1^t \exp\left( (\gamma-1)\,\frac{2\sum_{k\in\bar K^t} v_k x_k}{2V_1^t + \sum_{k\in\bar K^t} v_k x_k} + \frac{\sum_{k\in\bar K^t} v_k (r_k-\lambda) x_k}{C_1^t} \right),$$

since $\ln(1+\tau) \geq \frac{2\tau}{2+\tau}$ and $\ln(1+\tau) \leq \tau$ for $\tau \geq 0$, $r_k > \lambda$ for all $k \in \bar K^t$, and $\gamma-1<0$;

$$\leq (V_1^t)^{\gamma-1} C_1^t \exp\left( (\gamma-1)\,\frac{2\sum_{k\in\bar K^t} v_k x_k}{2V_1^t + \sum_{k\in\bar K^t} v_k} + \frac{\sum_{k\in\bar K^t} v_k (r_k-\lambda) x_k}{C_1^t} \right),$$

since $\sum_{k\in\bar K^t} v_k x_k \leq \sum_{k\in\bar K^t} v_k$;

$$= (V_1^t)^{\gamma-1} C_1^t \exp\left( \sum_{k\in\bar K^t} v_k \left( \frac{2(\gamma-1)}{2V_1^t + \sum_{k\in\bar K^t} v_k} + \frac{r_k-\lambda}{C_1^t} \right) x_k \right).$$

We denote the coefficients inside the above exponential function as

$$w_k^t = v_k \left( \frac{2(\gamma-1)}{2V_1^t + \sum_{k\in\bar K^t} v_k} + \frac{r_k-\lambda}{C_1^t} \right).$$

A.5. Proof of Proposition 7

i) The proof of this part is similar to that of Proposition 4.

ii) The first part holds because, for any $S$ with $k \notin S$, the objective of $S\cup\{k\}$ is larger than that of $S$ if $r_k-\lambda \geq 0$. We can then include in the assortment all products $k$ such that $r_k \geq \lambda$. We also calculate $V_1^t = \sum_{k\in K_1^t} v_k + v_0$ and $C_1^t = \sum_{k\in K_1^t} v_k(r_k-\lambda) - \lambda v_0$, and define $K = N\setminus K_1^0$. The subproblem can then be expressed as:

$$\max_{x\in\{0,1\}^{|K|}} f^0(x) = \Big(V_1^0 + \sum_{k\in K} v_k x_k\Big)^{\gamma-1} \times \Big(C_1^0 + \sum_{k\in K} v_k(r_k-\lambda)x_k\Big) \qquad (\mathrm{A.2})$$

We compute the partial derivative of $f^0$ with respect to variable $x_k$:

$$\frac{\partial f^0(x)}{\partial x_k} = (\gamma-1)\Big(V_1^0+\sum_{j\in K} v_j x_j\Big)^{\gamma-2} v_k \Big(C_1^0+\sum_{j\in K} v_j(r_j-\lambda)x_j\Big) + \Big(V_1^0+\sum_{j\in K} v_j x_j\Big)^{\gamma-1} v_k(r_k-\lambda)$$

$$= v_k\Big(V_1^0+\sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^0+\sum_{j\in K} v_j(r_j-\lambda)x_j}{V_1^0+\sum_{j\in K} v_j x_j} + (r_k-\lambda)\right)$$

$$\leq v_k\Big(V_1^0+\sum_{j\in K} v_j x_j\Big)^{\gamma-1}\left( (\gamma-1)\,\frac{C_1^0}{V_1^0} + (r_k-\lambda)\right),$$

as $v_j(r_j-\lambda) < 0$ for all $j\in K$ and $\gamma-1 > 0$; hence

$$\frac{\partial f^0(x)}{\partial x_k} \leq 0 \quad \text{if} \quad r_k \leq \lambda - (\gamma-1)\,\frac{C_1^0}{V_1^0}.$$

Therefore, if we define $K_0^0 = \{k\in N\setminus K_1^0 : r_k < \lambda - (\gamma-1)(C_1^0/V_1^0)\}$, then $x_k^* = 0$ for $k\in K_0^0$.

iii) If $H < 0$, similarly to the case $\gamma \leq 1$, there exists no positive value of the objective function. Therefore, since the value of the objective function (11) is always negative, adding products with $r_k < \lambda$ will always result in a lower value.

Appendix B. Pseudo-code of the preprocessing algorithms

We summarize the findings of this section in Algorithms (2)–(5).
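Before the pseudo-code, the exponential upper bound derived in the proof of Proposition 6 can be sanity-checked numerically. The sketch below (toy data of our own, not taken from the paper) verifies $f^t(x) \leq (V_1^t)^{\gamma-1} C_1^t\, e^{\sum_k w_k^t x_k}$ on every assortment of a two-product node:

```python
from itertools import product
from math import exp

# Toy node data (made up): gamma < 1, C_1^t > 0 (case H > 0), all r_k > lam.
gamma, lam = 0.7, 1.0
V1, C1 = 2.0, 1.5          # V_1^t and C_1^t of the current node
r = [1.5, 2.5]             # revenues of the undecided products
v = [0.6, 0.4]             # attractiveness of the undecided products

# Coefficients w_k^t of the linearized exponent, as derived above.
sum_v = sum(v)
w = [v[k] * (2 * (gamma - 1) / (2 * V1 + sum_v) + (r[k] - lam) / C1)
     for k in range(len(r))]

for x in product([0, 1], repeat=len(r)):
    V = V1 + sum(v[k] * x[k] for k in range(len(r)))
    C = C1 + sum(v[k] * (r[k] - lam) * x[k] for k in range(len(r)))
    f_t = V ** (gamma - 1) * C                    # objective value (15)
    ub = V1 ** (gamma - 1) * C1 * exp(sum(w[k] * x[k] for k in range(len(r))))
    assert f_t <= ub + 1e-12                      # the exponential bound holds
print("bound verified on all assortments")
```

The bound is tight at $x = 0$ and loosens as more products enter, which is what makes it usable as a node upper bound once the exponent is maximized over $x$.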
[…] in the previous child nodes. Hence, the coefficients $w_k^t$ can be positive or negative. Then the solution to

$$\max_{x\in\{0,1\}^{|\bar K^t|}} \sum_{k\in\bar K^t} w_k^t x_k$$

[…]

B.1.

Algorithm 2 (fragment):
15 $x_k^* = 1, \forall k \in K_1^0$ ;
Continue with upper bound $(V_1^t)^{\gamma-1} C_1^t\, e^{z_1^*(t)}$ ;

B.2.

Algorithm 3 (fragment):
5 $K_1^t = \{k \in \bar K^{t-1} : r_k \geq \lambda + (1-\gamma)\,(C_1^{t-1} + (r_{\max}-\lambda)\sum_{j\in\bar K^{t-1}} v_j)/(V_1^{t-1} + \sum_{j\in\bar K^{t-1}} v_j)\}$ ;
6 $x_k^* = 1, \forall k \in K_1^t$ ;
Continue with upper bound $(V_1^t)^{\gamma-1} C_1^t\, e^{z_1^*(t)}$ ;
7 else if $H < 0$ then // Case 2
…
9 $C_1^{t-1} = \sum_{k\in K_1^{t-1}} v_k (r_k-\lambda) - \lambda v_0$ ;
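Although the surrounding text is fragmentary here, the binary maximization of the linear exponent displayed above decomposes componentwise: each $x_k$ is set to 1 exactly when its coefficient is positive. A minimal sketch (toy coefficients, our illustration):

```python
# Toy exponent coefficients (made up); in the paper these would be the w_k^t.
w = [0.128, -0.352, 0.0, 0.4]

# A linear 0-1 objective is separable: take x_k = 1 iff w_k > 0.
x = [1 if wk > 0 else 0 for wk in w]
z_star = sum(wk for wk in w if wk > 0)   # optimal value of the exponent

print(x, round(z_star, 3))
```

Plugging the maximized exponent into $(V_1^t)^{\gamma-1} C_1^t\, e^{(\cdot)}$ then gives the node upper bound appearing in the algorithm fragments above.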
B.3.

Algorithm 4: Preprocessing for t = 0 when γ > 1
1 $N \leftarrow$ set of all products ;
2 $H \leftarrow \sum_{k\in K: r_k \geq \lambda} v_k (r_k-\lambda) - \lambda v_0$ ;
3 if $H = 0$ then // Case 0
4 $K_1^0 = \{k \in N : r_k \geq \lambda\}$ ;
5 $x_k^* = 1, \forall k \in K_1^0$ ;
6 $K_0^0 = \{k \in N \setminus K_1^0 : r_k < \lambda\}$ ;
7 $x_k^* = 0, \forall k \in K_0^0$ ;
8 Exit (subproblem is solved) ;
9 else if $H > 0$ then // Case 1
10 $K_1^0 = \{k \in N : r_k \geq \lambda\}$ ;
11 $x_k^* = 1, \forall k \in K_1^0$ ;
12 $C_1^0 = \sum_{k\in K_1^0} v_k (r_k-\lambda) - \lambda v_0$ ;
13 $V_1^0 = v_0 + \sum_{k\in K_1^0} v_k$ ;
14 $K_0^0 = \{k \in N \setminus K_1^0 : r_k < \lambda - (\gamma-1)(C_1^0/V_1^0)\}$ ;
15 $x_k^* = 0, \forall k \in K_0^0$ ;
16 Continue with upper bound $(V_1^t)^{\gamma-1} C_1^t\, e^{z_2^*(t)}$ ;
17 else if $H < 0$ then // Case 2
18 $K_0^0 = \{k \in N : r_k < \lambda\}$ ;
19 $x_k^* = 0, \forall k \in K_0^0$ ;
20 Continue with branching ;

B.4.

Algorithm 5: Preprocessing for t ≠ 0 when γ > 1
1 $\bar K^{t-1} \leftarrow$ set of undecided variables until node $t$ ;
2 $K_1^t \leftarrow$ set of variables fixed to 1 until node $t$ ;
3 $C_1^t \leftarrow \sum_{k\in K_1^t} v_k (r_k-\lambda) - \lambda v_0$ ;
4 $V_1^t \leftarrow v_0 + \sum_{k\in K_1^t} v_k$ ;
5 if $H > 0$ then
6 $K_0^t = \{k \in \bar K^{t-1} : r_k < \lambda - (\gamma-1)(C_1^t/V_1^t)\}$ ;
7 $x_k^* = 0, \forall k \in K_0^t$ ;
8 Continue with upper bound $(V_1^t)^{\gamma-1} C_1^t\, e^{z_2^*(t)}$ ;

References

Archetti, C., Desaulniers, G., & Speranza, M. G. (2017). Minimizing the logistic ratio in the inventory routing problem. EURO Journal on Transportation and Logistics, 6(4), 289–306.
Ben-Akiva, M. E., & Lerman, S. R. (1985). Discrete choice analysis: Theory and application to travel demand: 9. MIT Press.
Bertsimas, D., & Vayanos, P. (2014). Data-driven learning in dynamic pricing using adaptive optimization. Optimization Online, 20. http://www.optimization-online.org/DB_HTML/2014/10/4595.html.
Billionnet, A. (2010). Optimal selection of forest patches using integer and fractional programming. Operational Research: An International Journal, 10, 1–26.
Billionnet, A. (2013). Mathematical optimization ideas for biodiversity conservation. European Journal of Operational Research, 231(3), 514–534.
Börsch-Supan, A. (1990). On the compatibility of nested logit models with utility maximization. Journal of Econometrics, 43(3), 373–388.
Chen, R., & Jiang, H. (2019). Capacitated assortment and price optimization under the multilevel nested logit model. Operations Research Letters, 47(1), 30–35.
Chen, X., et al. (2020). Dynamic assortment planning under nested logit models. Production and Operations Management. https://doi.org/10.1111/poms.13258.
Davis, J. M., Gallego, G., & Topaloglu, H. (2014). Assortment optimization under variants of the nested logit model. Operations Research, 62(2), 250–273.
Davis, J. M., Topaloglu, H., & Williamson, D. P. (2016). Pricing problems under the nested logit model with a quality consistency constraint. INFORMS Journal on Computing, 29(1), 54–76.
Désir, A., Goyal, V., & Zhang, J. (2014). Near-optimal algorithms for capacity constrained assortment optimization. Working paper, Optimization Online. http://www.optimization-online.org/DB_FILE/2013/05/3875.pdf.
Dinkelbach, W. (1967). On nonlinear fractional programming. Management Science, 13(7), 492–498.
Feldman, J. B., & Topaloglu, H. (2015). Capacity constraints across nests in assortment optimization under the nested logit model. Operations Research, 63(4), 812–822.
Flores, A., Berbeglia, G., & Van Hentenryck, P. (2019). Assortment optimization under the sequential multinomial logit model. European Journal of Operational Research, 273(3), 1052–1064.
Gallego, G., & Topaloglu, H. (2014). Constrained assortment optimization for the nested logit model. Management Science, 60(10), 2583–2601.
Garey, M. R., & Johnson, D. S. (1990). Computers and intractability: A guide to the theory of NP-completeness. USA: W. H. Freeman & Co.
Goyal, V., Levi, R., & Segev, D. (2016). Near-optimal algorithms for the assortment planning problem under dynamic substitution and stochastic demand. Operations Research, 64(1), 219–235.
Klau, G. W., Ljubić, I., Mutzel, P., Pferschy, U., & Weiskircher, R. (2003). The fractional prize-collecting Steiner tree problem on trees. In G. D. Battista & U. Zwick (Eds.), Proceedings of the 11th Annual European Symposium on Algorithms – ESA 2003, Lecture Notes in Computer Science: Vol. 2832 (pp. 691–702). Springer. Budapest, Hungary, September 16–19, 2003.
Kök, A. G., Fisher, M. L., & Vaidyanathan, R. (2008). Assortment planning: Review of literature and industry practice. In Retail supply chain management (pp. 99–153). Springer.
Lee, B. (1999). Calling patterns and usage of residential toll service under self selecting tariffs. Journal of Regulatory Economics, 16(1), 45–82.
Li, G., & Rusmevichientong, P. (2014). A greedy algorithm for the two-level nested logit model. Operations Research Letters, 42(5), 319–324.
Li, G., Rusmevichientong, P., & Topaloglu, H. (2015). The d-level nested logit model: Assortment and price optimization problems. Operations Research, 63(2), 325–342.
Luce, R. D. (1959). Individual choice behavior: A theoretical analysis. Wiley. Republished by Dover Publications, 2012.
Mai, T., & Lodi, A. (2019). An algorithm for assortment optimization under parametric discrete choice models. Available at SSRN 3370776.
McFadden, D. (1973). Conditional logit analysis of qualitative choice behavior. In P. Zarembka (Ed.), Frontiers in econometrics (pp. 105–142). New York: Academic Press.
McFadden, D. (1980). Econometric models for probabilistic choice among products. Journal of Business, S13–S29.
Megiddo, N. (1978). Combinatorial optimization with rational objective functions. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing (pp. 1–12). ACM.
Rusmevichientong, P., & Topaloglu, H. (2012). Robust assortment optimization in revenue management under the multinomial logit choice model. Operations Research, 60(4), 865–882.
Segev, D. (2020). Approximation schemes for capacity-constrained assortment optimization under the nested logit model. Available at SSRN 3553264.
Sethuraman, S., & Butenko, S. (2015). The maximum ratio clique problem. Computational Management Science, 12, 197–218.
Talluri, K., & van Ryzin, G. (2004). Revenue management under a general discrete choice model of consumer behavior. Management Science, 50(1), 15–33.
Tiwari, P., & Hasegawa, H. (2004). Demand for housing in Tokyo: A discrete choice analysis. Regional Studies, 38(1), 27–42.
Train, K. E., Ben-Akiva, M., & Atherton, T. (1989). Consumption patterns and self-selecting tariffs. The Review of Economics and Statistics, 62–73.
Train, K. E., McFadden, D. L., & Ben-Akiva, M. (1987). The demand for local telephone service: A fully discrete model of residential calling patterns and service choices. The RAND Journal of Economics, 109–123.
Williams, H. C. (1977). On the formation of travel demand models and economic evaluation measures of user benefit. Environment and Planning A, 9(3), 285–344.
Yates, J., & Mackay, D. F. (2006). Discrete choice modelling of urban housing markets: A critical review and an application. Urban Studies, 43(3), 559–581.