
Exact Quantum Optimization

Agenda

1. Quantum Phase Estimation (QPE)


2. A Quantum Algorithm for solving Linear Systems of Equations (HHL)

Summer 22 Quantum Computing Programming – Quantum Approximate Optimization 2


The road ahead

• In the first part of this lecture, we will investigate if it’s possible to extend Grover’s algorithm to
enable the efficient search of solutions in the context of optimization problems
• After that, we will use the subroutine discovered in the first part (QPE) to build a quantum
algorithm to solve linear systems of equations exponentially faster than classically possible under
certain conditions



Quantum Phase Estimation (QPE)
Solving optimization problems with amplitude amplification
Exploring the Search Space of Optimization problems

• We already know how to search the state space of a qubit register using amplitude amplification,
based on the simplest possible oracle discriminating between “good” and “bad” solutions
• In order to use this idea for optimization, we need an oracle that yields the associated function
value f(x) for a given solution x
• In terms of unitary operations, this corresponds to U|0⟩^⊗n |x⟩ = |f(x)⟩|x⟩, where |x⟩ represents
the solution (w.l.o.g. a bitstring) and |f(x)⟩ corresponds to a floating-point bitstring of arbitrary,
but fixed length n
• Clearly, U must encode a mapping from all possible solutions to their respective energies
• Recalling the last lectures, we already know a mathematical object that does exactly that: the
final Hamiltonian Ĥ_F corresponding to an optimization problem formulated in the Ising model
• Taking a closer look at the matrix representation of Ĥ_F, we can see that it has the form
diag(λ₁, …, λ_{2ⁿ}), with λ₁ corresponding to the energy of the first computational basis vector |0⟩^⊗n,
and so on until λ_{2ⁿ}, which corresponds to the energy of the last basis vector |1⟩^⊗n
• Looking at the implementation of the corresponding unitary of Ĥ_F, i.e., U_f := e^{2πi·Ĥ_F} (where we chose
t = −2π for reasons we will see later), we can see that it has the form diag(e^{2πi·λ₁}, …, e^{2πi·λ_{2ⁿ}})
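As a minimal numerical sketch (my own illustration with a made-up 2-qubit energy spectrum), the diagonal structure of Ĥ_F makes U_f trivial to write down:

```python
import numpy as np

# Hypothetical 2-qubit example: the final Hamiltonian H_F of an Ising-formulated
# problem is diagonal, with one energy per computational basis state (scaled to [0, 1))
energies = np.array([0.0, 0.25, 0.5, 0.75])      # λ_1, ..., λ_4
H_F = np.diag(energies)

# Because H_F is diagonal, U_f = e^{2πi·H_F} is simply diag(e^{2πi·λ_1}, ..., e^{2πi·λ_4})
U_f = np.diag(np.exp(2j * np.pi * energies))

# Each computational basis state is an eigenvector that picks up its own phase;
# e.g. |10⟩ (index 2) carries the phase e^{2πi·0.5} = -1
basis_state = np.eye(4)[2]
```
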



Extracting the phases of eigenvalues of unitary operations
• Therefore, the task of finding a unitary U|0⟩^⊗n |x⟩ = |f(x)⟩|x⟩ can be reduced to extracting λᵢ from U_f
for a given eigenstate |φᵢ⟩
• The choice of t = −2π for the implementation of Ĥ_F works perfectly iff the solution values
λ₁, …, λ_{2ⁿ} of the given optimization problem are restricted to the interval [0, 1) (which can be
easily encoded in binary floats – and is therefore also the reason for this specific choice of t)
• While this “normalization” is usually not given for arbitrary problems, preprocessing routines exist
that allow a suitable rescaling of Ĥ_F; e.g., for the Max-Cut problem we know exactly the highest and
lowest possible eigenvalues and can thus rescale Ĥ_F accordingly
• As Max-Cut is NP-hard, every problem in NP can be reduced to it, generalizing this approach beyond
Max-Cut
• Assuming a suitable scaling of Ĥ_F, we are now curious how the phases λᵢ of the eigenvalues e^{2πi·λᵢ}
associated with a given eigenstate |φᵢ⟩ of the unitary matrix U_f can be extracted
• Remembering phase kickback from the third lecture, we can see that we can in fact extract the phase
of a unitary operation easily by executing it in a controlled manner:
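This can be checked with a small statevector simulation (my own sketch, not the lecture's circuit): a control qubit in |+⟩ controls U = diag(1, e^{2πiλ}) acting on the eigenstate |1⟩; the phase lands on the control, and a final Hadamard makes it measurable for λ ∈ {0, 1/2}:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard

def control_prob_zero(lam):
    """Probability of reading 0 on the control after a single phase kickback."""
    ctrl = H @ np.array([1.0, 0.0])            # control prepared in |+⟩
    target = np.array([0.0, 1.0])              # eigenstate |1⟩ of U = diag(1, e^{2πiλ})
    state = np.kron(ctrl, target)
    c_u = np.diag([1, 1, 1, np.exp(2j * np.pi * lam)])   # controlled-U
    state = np.kron(H, np.eye(2)) @ (c_u @ state)        # kickback, then H on control
    return abs(state[0]) ** 2 + abs(state[1]) ** 2

# λ = 0 leaves the control in |+⟩ (always reads 0); λ = 1/2 flips it to |−⟩ (always reads 1)
```
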



Transforming local phases into measurable output

• For the easy case of λᵢ being either 0 or 1/2, we can see that the following circuit yields exactly
what we want (figure: circuit with outcomes λᵢ = 0.5 and λᵢ = 0)
• This can be visualized on the equator of the Bloch sphere for the control qubit (initialized in
the |+⟩ state), where we can observe the two possible outcomes |+⟩ and |−⟩ of the phase kickback
• Expanding to the more complicated case of two-bit precision (i.e., allowing λᵢ ∈ {0, 0.25, 0.5, 0.75}),
we need two output qubits to encode λᵢ
• To find the least significant bit φ₂ of λᵢ = 0.φ₁φ₂ (using the notation of binary fractions), we can
see that for φ₂ = 1 either a quarter rotation or a three-quarter rotation is executed by the phase
kickback, while for φ₂ = 0 either no rotation or a half rotation is executed
• With a little creativity, we can see that we can discriminate between φ₂ = 0 and φ₂ = 1 by using
phase kickback twice, as the rotation then ends up on the left of the equator for φ₂ = 1 and on the
right for φ₂ = 0 (figure: equator positions λᵢ = 0.00, 0.01, 0.10, 0.11)



Increasing the precision in the phase estimation algorithm

• Generalizing this concept, we can conclude that the least significant bit φ_m of λᵢ = 0.φ₁φ₂⋯φ_m can
be determined with the following circuit:

• Before generalizing the whole circuit to arbitrary precision (i.e., large m), we first need to explore
how to find φ₁ in the case of λᵢ = 0.φ₁φ₂
• As we can already compute φ₂, we can use it to compute φ₁ by reducing the problem to the standard
(single) phase kickback with only the left and right rotation states, i.e., when φ₂ = 1, rotating back
by a quarter rotation (figure: equator positions λᵢ = 0.00, 0.01, 0.10, 0.11)
Enabling arbitrary precision in the phase estimation algorithm

With this concept in mind, we can generalize to the case of λᵢ = 0.φ₁φ₂φ₃ and beyond:

As we can see, the final part of the circuit basically performs a basis transformation from the
“rotation” basis (commonly called the Fourier basis) to the computational basis
(figure: computational basis vs. Fourier basis states):

https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html
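The basis transformation itself is just the DFT matrix on amplitudes; a short numpy sketch (my own, function name assumed) shows that each computational basis state maps to a uniform-magnitude Fourier basis state:

```python
import numpy as np

def qft_matrix(n):
    """n-qubit QFT: F[j, k] = ω^{jk} / √N with ω = e^{2πi/N}, N = 2^n."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(2)
# Column k is the Fourier-basis state of |k⟩: uniform magnitudes 1/√N, with
# phases advancing around the unit circle at rate k/N
fourier_state = F @ np.eye(4)[1]
```
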



The Quantum Phase Estimation Algorithm

This concludes our efforts and constitutes the Quantum Phase Estimation Algorithm (QPE):

• In this circuit, we already generalized from our special case U_f = e^{2πi·Ĥ_F} to arbitrary unitary
matrices U with associated eigenvalues e^{2πi·λᵢ} (and phases λᵢ = 0.φ₁φ₂⋯φ_m) for eigenvectors |φᵢ⟩
• By convention, the transformation from the computational basis into the Fourier basis is called QFT,
such that we need to use its inverse QFT†
• As λᵢ might not be representable as a binary fraction, the output might not exactly match λᵢ
• However, the output will be the best possible m-bit approximation with probability at least
4/π² ≈ 40%, as can be shown by calculation
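These claims can be checked numerically. The following statevector sketch of QPE (my own illustration) uses the fact that, after the controlled-U^{2^k} layer, the m-qubit counting register holds (1/√M) Σ_k e^{2πiφk}|k⟩ with M = 2^m, so applying the inverse QFT concentrates the probability near φ·M:

```python
import numpy as np

def qpe_distribution(phase, m):
    """Outcome probabilities of m-bit QPE for an eigenvalue e^{2πi·phase}."""
    M = 2 ** m
    # counting register after the controlled-U^{2^k} layer
    amps = np.exp(2j * np.pi * phase * np.arange(M)) / np.sqrt(M)
    # inverse QFT matrix
    k, l = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    iqft = np.exp(-2j * np.pi * k * l / M) / np.sqrt(M)
    return np.abs(iqft @ amps) ** 2

probs = qpe_distribution(1 / 3, 4)   # 1/3 has no finite binary fraction
best = int(np.argmax(probs))         # most likely 4-bit readout
# best/16 = 5/16 = 0.3125 is the closest 4-bit approximation of 1/3, and it
# occurs with probability above the 4/π² ≈ 0.405 bound
```
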



Combining Amplitude Amplification with the QPE for Optimization

Being able to convert from the solution space to the solution value space with the QPE algorithm, we
can now formulate an optimization algorithm based on Amplitude Amplification:

• For the initialization step, we exploit the linearity of the QPE by putting in a superposition of all
possible eigenvectors, i.e., |+⟩^⊗n = Σᵢ αᵢ |φᵢ⟩ for some amplitudes αᵢ ∈ ℂ
• In the presented optimization algorithm, U_g can be chosen as a unitary representing a binary function
yielding 1 iff the leading entries φ₁⋯φⱼ of |φ₁φ₂⋯φ_m⟩ are 0⋯0, meaning that λᵢ is smaller than 2⁻ʲ
• In the circuit above, we used a separate notation for the QPE with swapped in- and outputs (i.e., its
inverse):
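The predicate that U_g would mark can be sketched classically (function name and example values are my own): a value λ with m-bit expansion 0.φ₁…φ_m satisfies λ < 2⁻ʲ exactly when its j leading bits are all zero:

```python
def leading_bits_zero(lam, m, j):
    """True iff the first j bits of the m-bit binary fraction of lam are all 0,
    which is equivalent to lam < 2^-j (lam assumed exactly representable in m bits)."""
    bits = format(int(lam * 2 ** m), f"0{m}b")
    return set(bits[:j]) <= {"0"}

# λ = 0.0625 = 0.0001₂ is below 2⁻³; λ = 0.25 = 0.0100₂ is not
```
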



A Quantum Algorithm for solving
Linear Systems of Equations (HHL)
Finally, exponential speedup for practical problems
Idea: Use QPE’s eigendecomposition to solve LSEs

• As we have seen already, the highly relevant problem of convex optimization (w.l.o.g. for quadratic
problems) can be solved exactly by means of solving a (potentially very large) linear system of
equations (LSE) Ax = b, where A ∈ ℂ^{N×N} and x, b ∈ ℂ^N (usually, problems reside in the real numbers)
• The most straightforward way to solve for x is inverting A and multiplying it on both sides of the
equation: A⁻¹Ax = A⁻¹b ⟺ x = A⁻¹b (given that A is invertible)
• As we usually deal with diagonalizable matrices A in practice (i.e., ∃ Q, Λ ∈ ℂ^{N×N} : A = QΛQ⁻¹, where Q
is a matrix whose i-th column is the i-th eigenvector |eᵢ⟩ of A and Λ is a diagonal matrix whose i-th
diagonal element is the eigenvalue λᵢ corresponding to the i-th eigenvector), we can see:
A⁻¹ = (QΛQ⁻¹)⁻¹ = (Q⁻¹)⁻¹ Λ⁻¹ Q⁻¹ = Q Λ⁻¹ Q⁻¹
• As diagonal matrices are extremely easy to invert (i.e., Λ⁻¹ = diag(1/λ₁, …, 1/λ_N)), an efficient
eigendecomposition allows for an efficient inversion
• Conveniently, as we know already, Hermitian matrices are diagonalizable, with real-valued eigenvalues
• For arbitrary matrices, we can use a trick to “make them Hermitian” (and thus accessible to the QC):
instead of solving Ax = b, we solve A′x′ = b′ with
A′ = [[0, A], [A†, 0]], x′ = [0, x]ᵀ, b′ = [b, 0]ᵀ,
where A′ is guaranteed to be Hermitian and the solution x can be easily extracted (thus we assume A
to be Hermitian in the following)
• Just like before, we will also assume a sufficient scaling of A for all eigenvalues to lie between 0 and 1
(or between −0.5 and 0.5, if negative eigenvalues are present) such that all of them are uniquely resolvable
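The embedding trick can be verified quickly with numpy (the example matrix is my own):

```python
import numpy as np

# Hypothetical non-Hermitian system A x = b
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([3.0, 1.0])
N = A.shape[0]

# Hermitian embedding: A' = [[0, A], [A†, 0]], b' = [b, 0]
A_prime = np.block([[np.zeros((N, N)), A],
                    [A.conj().T, np.zeros((N, N))]])
b_prime = np.concatenate([b, np.zeros(N)])

x_prime = np.linalg.solve(A_prime, b_prime)
x = x_prime[N:]      # x' = [0, x]: the lower block solves the original system
```
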



Excursion: Existence of negative eigenvalues

Scaling the eigenvalues further down to be in the interval (−0.5, 0.5) results in the fact that positive
eigenvalues lie on the upper half of the unit circle, while negative eigenvalues lie on the lower half
(figure: φ = 1/4 at e^{2πi·1/8} on the upper half, φ = −1/4 at e^{−2πi·1/8} on the lower half):

• QPE then computes the eigenvalues in two’s complement representation:
• φ = 1/4 ↦ |001⟩
• φ = −1/4 ↦ |111⟩
• Bitflip: |000⟩
• +1: |001⟩

Note: As this corresponds only to small changes in implementation details, we will assume the easy case
of all eigenvalues lying in the “easy” [0, 1) interval for notational simplicity in the following.
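The two's complement readout above can be mimicked classically (function name is my own; as in the figure, the eigenvalues ±1/4 are assumed to enter QPE as register phases ±1/8, i.e., e^{±2πi/8}):

```python
def phase_register(phi, p):
    """p-bit QPE readout for phase phi: round(phi · 2^p) mod 2^p as a bit string.

    For negative phases the modular wrap-around yields exactly the p-bit
    two's complement pattern of the signed integer round(phi · 2^p).
    """
    return format(int(round(phi * 2 ** p)) % 2 ** p, f"0{p}b")

# register phase +1/8 reads |001⟩; register phase −1/8 wraps around to |111⟩,
# whose two's complement decoding (bitflip 111 → 000, then +1 → 001) recovers 1/8
```
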



Sketching the algorithm

• The general idea is thus to compute the eigendecomposition of A for its action on b, where we use the
amplitude encoding |b⟩ = (1/√(Σⱼ |bⱼ|²)) · (b₁, …, b_N)ᵀ (in the specific case of convex opt., b is zero anyway)
• With this, we can think of |b⟩ as a superposition of eigenvectors of A, i.e., Σⱼ kⱼ |eⱼ⟩, and thus we have:
QPE(|b⟩ ⊗ |0⟩^⊗p) = QPE(Σⱼ₌₁ᴺ kⱼ |eⱼ⟩ ⊗ |0⟩^⊗p) = Σⱼ₌₁ᴺ kⱼ · QPE(|eⱼ⟩ ⊗ |0⟩^⊗p) = Σⱼ₌₁ᴺ kⱼ |eⱼ⟩ ⊗ |λⱼt⟩
• If we can now invert the eigenvalues properly, we are done
• While this isn’t too hard in theory, many arithmetic gate operations are needed, as we will see later:

(Figure: HHL circuit mapping |b⟩ to the normalized state |x⟩ ∝ Σⱼ C·(kⱼ/(λⱼt)) |eⱼ⟩)



Computing reciprocals

• As we have computed information on the eigenpairs of A, we have (more or less) found the
eigendecomposition of A
• As the eigenvalues are stored in a register, we can simply invert them by implementing an arithmetic
circuit for division:
Σⱼ₌₁ᴺ kⱼ |eⱼ⟩ ⊗ |λⱼt⟩ ↦ Σⱼ₌₁ᴺ kⱼ |eⱼ⟩ ⊗ |1/(λⱼt)⟩
• While we already know how to map arbitrary classical (arithmetic) circuits to quantum circuits, this
often leads to lengthy computations in terms of the absolute number of gate operations
(figure: circuit for integer division, arXiv:1809.09732v1)



Preliminary Consideration: Embedding non-unitary transformations

• As the solution to the given LSE might not be a normalized vector, the operation changing |b⟩ to
|x⟩ is not unitary in general
• This problem can be solved by enlarging the Hilbert space using an ancilla qubit:
in this larger space, the ancilla qubit is in a superposition of state |1⟩ for the solution of the LSE
and state |0⟩ for ensuring unitarity
• If we thus measure this ancilla qubit to be in state |1⟩, we get our solution (scaled for the necessary
normalization) – as an example, consider |x⟩ = (1/4, 1/4)ᵀ and a suitable unitary U with
U|00⟩ = (√7/4)|00⟩ + (√7/4)|01⟩ + (1/4)|10⟩ + (1/4)|11⟩
• Clearly, the subspace where the first qubit is in state |1⟩ encodes the solution of the LSE
• Measuring the first qubit to be |1⟩ gives us an “automatically” normalized quantum state proportional
to the exact solution:
(1/√(1/16 + 1/16)) · ((1/4)|10⟩ + (1/4)|11⟩) = (1/√2)|10⟩ + (1/√2)|11⟩
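The post-selection step of this example can be checked numerically (a small sketch of my own):

```python
import numpy as np

# State from the example: (√7/4)|00⟩ + (√7/4)|01⟩ + (1/4)|10⟩ + (1/4)|11⟩
state = np.array([np.sqrt(7) / 4, np.sqrt(7) / 4, 1 / 4, 1 / 4])

# Post-select the first qubit on |1⟩: keep the |10⟩, |11⟩ amplitudes and renormalize
sub = state[2:]
p_success = np.sum(np.abs(sub) ** 2)     # probability of measuring the ancilla in |1⟩
post = sub / np.sqrt(p_success)          # (1/√2, 1/√2): proportional to x = (1/4, 1/4)ᵀ
```
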



Setting amplitudes on a qubit depending on a parameter

As it’s in principle clear how to invert the eigenvalues in the bit-string-encoded representation, we now
need to use this representation to execute our main objective: changing the amplitudes of b from
Σⱼ kⱼ |eⱼ⟩ to Σⱼ kⱼ (1/λⱼ) |eⱼ⟩, as x = A⁻¹b = A⁻¹ Σⱼ₌₁ᴺ kⱼ |eⱼ⟩ = Σⱼ₌₁ᴺ kⱼ A⁻¹|eⱼ⟩ = Σⱼ₌₁ᴺ (kⱼ/λⱼ) |eⱼ⟩

• As we have seen on the last slide, such changes in amplitudes necessitate an ancilla qubit
• Our goal is thus constructing a unitary operation that takes the information about the inverted
eigenvalues 1/λⱼ encoded inside a qubit register in the computational basis (as binary fractions) and
transforms it into an amplitude
• As we can see, this task can be accomplished conveniently using controlled rotations R_y(θ).
For this, recall that:
• R_y(θ)|0⟩ = (cos(θ/2), sin(θ/2))ᵀ and thus R_y(2θ)|0⟩ = (cos θ, sin θ)ᵀ
• With this in mind, we can easily encode the information contained in the control register (i.e., its
state θ) into an amplitude via R_y(2 arcsin θ)|0⟩ = (√(1 − θ²), θ)ᵀ for |θ| ≤ 1, if the control register is
preprocessed with the application of an arcsin function
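This rotation identity can be verified directly (the value of θ below is my own stand-in for C/(λⱼt)):

```python
import numpy as np

def ry(theta):
    """Standard single-qubit rotation about the y-axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

theta = 0.3                                        # plays the role of C/(λ_j·t), assumed ≤ 1
amp = ry(2 * np.arcsin(theta)) @ np.array([1.0, 0.0])
# amp = (√(1-θ²), θ)ᵀ: θ now sits in the amplitude of |1⟩, with no relative phase
```
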



Rotating about values stored in multi-qubit registers

• Using a rotation around the y-axis is therefore a convenient way to change/set the amplitudes of a
single qubit depending on some parameter without introducing any global phase or relative phase
• As the parameter θ in our case is the value 1/(λⱼt), which is stored in a multi-qubit register, we can
make use of the additivity of rotations around the same axis: R_y(a + b) = R_y(a) R_y(b) = R_y(b) R_y(a)
This concludes our approach to be the following (where the uncomputation at the end is added in order
to be able to easily reuse the qubits not encoding the needed result):

(Figure: full circuit mapping |b⟩ to “|x⟩”)



For the sake of completeness: The Math
Σⱼ₌₁ᴺ kⱼ |uⱼ⟩ ⊗ |1/(λⱼt)⟩

↦ Σⱼ₌₁ᴺ kⱼ |uⱼ⟩ ⊗ |C/(λⱼt)⟩,  where C ≤ λ_min·t because ∀j: C/(λⱼt) ≤ 1

↦ Σⱼ₌₁ᴺ kⱼ |uⱼ⟩ ⊗ |2 arcsin(C/(λⱼt))⟩

↦ |0⟩ ⊗ Σⱼ₌₁ᴺ kⱼ |uⱼ⟩ ⊗ |2 arcsin(C/(λⱼt))⟩

↦ Σⱼ₌₁ᴺ kⱼ (√(1 − (C/(λⱼt))²) |0⟩ + (C/(λⱼt)) |1⟩) ⊗ |uⱼ⟩ ⊗ |2 arcsin(C/(λⱼt))⟩

= √(1 − (C/(λⱼt))²) |0⟩ ⊗ (…) + (C/t) |1⟩ ⊗ Σⱼ₌₁ᴺ (kⱼ/λⱼ) |uⱼ⟩ ⊗ |2 arcsin(C/(λⱼt))⟩

↦ √(1 − (C/(λⱼt))²) |0⟩ ⊗ (…) + (C/t) |1⟩ ⊗ Σⱼ₌₁ᴺ (kⱼ/λⱼ) |uⱼ⟩ ⊗ |0⟩^⊗p   by uncomputation

= √(1 − (C/(λⱼt))²) |0⟩ ⊗ (…) + (C/t) |1⟩ ⊗ |x⟩ ⊗ |0⟩^⊗p   as the final result



Details: What about the arcsin?

For designing an arithmetic circuit for evaluating the arcsin function, we can use its series
representation:

arcsin(x) = Σₙ₌₀^∞ ((2n − 1)!! / (2n)!!) · x^{2n+1}/(2n + 1)
          = x + (1/2)·x³/3 + (1·3)/(2·4)·x⁵/5 + (1·3·5)/(2·4·6)·x⁷/7 + ⋯
          = x + (1/6)x³ + (3/40)x⁵ + (5/112)x⁷ + ⋯

This allows for an approximate, arithmetical computation using adders and multipliers.
While this approach already results in quite a big circuit even for the meager approximation of
computing x³, it allows for arbitrary precision in principle.
Other approaches include:
• The use of the first-order approximation arcsin(x) ≈ x for x ∈ (−0.5, 0.5); for this, 1/(λⱼt) must be
scaled accordingly, however
• Implementing more sophisticated approximation techniques, as in, e.g., Häner et al. (2018),
arXiv:1805.12445v1
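The truncated series can be sketched classically to see how quickly it converges inside (−0.5, 0.5) (my own helper, mirroring the coefficients above):

```python
import math

def arcsin_series(x, terms=20):
    """Truncated series arcsin(x) ≈ Σ ((2n-1)!!/(2n)!!) · x^(2n+1)/(2n+1)."""
    total, coeff = 0.0, 1.0                  # coeff tracks (2n-1)!!/(2n)!!
    for n in range(terms):
        total += coeff * x ** (2 * n + 1) / (2 * n + 1)
        coeff *= (2 * n + 1) / (2 * n + 2)   # update to the next double-factorial ratio
    return total

# agrees closely with math.asin well inside (-0.5, 0.5), where the terms decay fast
```
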



Run time: Phase Estimation

Curious about the possible speedup, we now proceed with a run time analysis of solving the LSE,
excluding state preparation and encoding.
• QPE scales in O(2ᵖ), where p is the size of the register yielding the estimate of the eigenvalues, i.e.,
the precision of the estimation
• An important quantity in this context is the condition number of the matrix A, κ(A) = λ_max/λ_min
• If we need to rescale A to make all eigenvalues uniquely accessible through QPE, its size determines
the needed accuracy of the QPE
• After scaling λ_max to be smaller than 1, λ_min is scaled to be smaller than λ_min/λ_max = 1/κ(A)
• In order to estimate 1/κ(A), we need to choose at least p = ⌈log₂ κ(A)⌉, s.t. O(2^⌈log₂ κ(A)⌉) = O(κ(A))
• This does still not guarantee exact estimation, however, because, as we remember, phase estimation
introduces errors when the phases can’t be represented by a finite number of bits
• In order to place an upper bound ε on the error term introduced by QPE, use as many qubits of
precision p such that 1/2^{p−1} ≤ ε
Remark: In the literature, this error term appears in run time analyses as O(∗ · 1/ε)
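Both requirements on p can be combined in a small helper (my own sketch, name and example values assumed):

```python
import math

def counting_qubits(kappa, eps):
    """Precision qubits p satisfying both p ≥ ⌈log2 κ(A)⌉ and 1/2^(p-1) ≤ ε."""
    p_kappa = math.ceil(math.log2(kappa))         # resolve eigenvalues down to 1/κ(A)
    p_eps = 1 + math.ceil(math.log2(1.0 / eps))   # bound the QPE error by ε
    return max(p_kappa, p_eps)

# e.g. κ(A) = 16 alone needs 4 qubits, but ε = 0.01 pushes p up to 8 (1/2⁷ ≈ 0.008 ≤ 0.01)
```
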



Run time: Arithmetic Circuits

• Although the arithmetic circuits for inverting the eigenvalues and evaluating the arcsin function are
quite big, they scale polynomially in p and are therefore asymptotically efficient
• Every classical circuit can be implemented efficiently on a quantum computer
• However, these circuits are far too big for current quantum computers



Run time: Success probability of the post-selection procedure

• As we only receive the correct result probabilistically, depending on the measured state of the ancilla
qubit, we are curious how often we actually measure it to be in state |1⟩
• For this, we can see that C/λⱼ = λ_min/λⱼ = 1/κ(A) for λⱼ = λ_max, and C/λⱼ ≥ 1/κ(A) for all j
• Considering the worst case, we can assume that, in the linear combination |b⟩ = k₁|u₁⟩ + k₂|u₂⟩ +
… + k_N|u_N⟩, the coefficient in front of the eigenvector corresponding to the biggest eigenvalue is 1, s.t.:
Σᵢ |kᵢ|² = 1 ⟹ |b⟩ = |u_max⟩
• In this case, after performing the HHL algorithm, the final state is:
|0⟩ ⊗ (…) + (1/κ(A)) |1⟩ ⊗ |u_max⟩
• Therefore, the probability of measuring the ancilla to be in state |1⟩ is at least (1/κ(A))² = 1/κ(A)²
• For O(κ(A)²) runs, we can thus be very sure to have gotten the correct result
• This runtime can even be boosted to O(κ(A)) using Amplitude Amplification
Concluding: for r denoting the run time of the state preparation and matrix encoding procedure (which is
generally O(log N) for an N-dimensional LSE), the overall complexity is O(r · κ(A)²), respectively
O(r · κ(A)) with Amplitude Amplification



Summary – Runtime and Space

The HHL algorithm can be used to prepare a state whose amplitudes are proportional to the solution
vector of a linear system of equations Ax = b in time O(log N · κ(A)²) for an N-dimensional LSE with an
efficiently embeddable b (i.e., logarithmic runtime in N) and a sparse matrix A
Input parameters:
• Algorithmic description of (structured) matrix 𝐴
• Algorithmic description of a (structured) right hand side 𝑏
Registers:
• One register encoding b – log₂(N) qubits
• One register storing computed eigenvalues – ≈ ⌈log₂ κ(A)⌉ qubits
• One ancilla qubit for encoding the solution vector into the register encoding b
• Further ancillary registers for arithmetic circuits, needing polylog(N) qubits



Summary – Hyperparameters and application

For the HHL algorithm, many hyperparameters are needed:


• An upper bound on the condition number κ(A), for estimating the run time and the precision p of
the eigenvalue estimation subroutine
• An upper and a lower bound on the eigenvalues of A, for guaranteeing unambiguous eigenvalue
estimation and a correct encoding of the solution into a subspace, i.e., for choosing the scaling
factors t and C, respectively
As the solution is encoded in a quantum state, we are not able to read out all components of x
(via quantum state tomography) without destroying the speedup. In order to maintain an advantage over
classical algorithms, however, we can use x to compute a global property, i.e., a property that
classically would need to take all (or at least some) of x’s components into account.

Compared with classical approaches for solving LSEs, which scale at least polynomially in the
dimension N, this constitutes an exponential advantage.

Remark: Often, we are not even interested in the exact solution x (as in Support Vector Machines,
where the meta-information of the classification of the regarded data point is all we care about).
This allows the practical application of the HHL algorithm.

