
The Peak Sidelobe Level of Binary Sequences

A classical problem of digital sequence design, first studied in the 1950s but still not well understood, is to determine those binary sequences whose non-trivial aperiodic autocorrelations are collectively small according to some suitable measure. The two principal such measures are the peak sidelobe level (PSL), which is the largest of their magnitudes, and the merit factor, which is based on their sum of squares. It has been known since 1968 that the PSL of almost all binary sequences of length n grows asymptotically no faster than order √(n log n). In 2007, Denis Dmitriev and I found numerical evidence that √(n log n) is the exact order of growth for almost all binary sequences; our experimental conclusion was proved shortly afterwards by Alon, Litsyn and Shpunt.
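Both measures can be computed directly from the aperiodic autocorrelations C_k = Σ_j a_j a_{j+k} of a ±1 sequence. A minimal Python sketch of the standard definitions (the length-13 Barker sequence used here is a classical illustrative example, not one of the sequences discussed above):

```python
# Aperiodic autocorrelations, PSL, and merit factor of a +/-1 sequence.
def autocorrelations(a):
    n = len(a)
    # C_k = sum_{j=0}^{n-1-k} a_j * a_{j+k}, for non-trivial shifts k = 1 .. n-1
    return [sum(a[j] * a[j + k] for j in range(n - k)) for k in range(1, n)]

def psl(a):
    # Peak sidelobe level: largest magnitude among the non-trivial autocorrelations.
    return max(abs(c) for c in autocorrelations(a))

def merit_factor(a):
    # Merit factor: n^2 divided by twice the sum of squared sidelobes.
    n = len(a)
    return n * n / (2 * sum(c * c for c in autocorrelations(a)))

# Length-13 Barker sequence: every sidelobe has magnitude at most 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(psl(barker13))           # 1
print(merit_factor(barker13))  # 169/12, approximately 14.08
```

The Barker sequence of length 13 achieves the largest merit factor known for any single binary sequence, which is why it serves as the standard benchmark.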

Is there a set of binary sequences whose PSL grows asymptotically more slowly than √(n log n)? The radar literature contains frequent assertions that the asymptotic PSL of m-sequences (also known as maximal length shift register sequences) grows no faster than order √n. In 2006, Kayo Yoshida and I carried out a historical and numerical investigation of this claim, concluding that it was not supported by theory or by data. In 2007, Denis Dmitriev and I discovered an algorithm that extends the range of exhaustive calculation for m-sequences, giving results up to sequence length 2^25 − 1. This provides the first numerical evidence that the PSL of almost all m-sequences actually grows exactly like order √n, and suggests that it will be advantageous to study the maximum PSL over all cyclic rotations of an m-sequence.

 
[Figure: The growth, relative to √n, of the maximum PSL over all cyclic rotations of an m-sequence of length n.]
The Merit Factor Problem

The merit factor is an important measure of the collective smallness of the aperiodic autocorrelations of a binary sequence (the other principal measure being the peak sidelobe level). The problem of determining the best value of the merit factor of long binary sequences has resisted decades of attack by mathematicians and communications engineers. In equivalent guise, the determination of the best asymptotic merit factor is an unsolved problem in complex analysis proposed by Littlewood in the 1960s that was studied along largely independent lines for more than twenty years. The same problem is also studied in theoretical physics and theoretical chemistry as a notoriously difficult combinatorial optimisation problem. My 2005 survey paper traces the historical development of the merit factor problem, bringing together results from the various disciplines.

It was established in 1988 that there are infinite families of binary sequences whose asymptotic merit factor attains the value six. Since then no-one has succeeded in finding a set of sequences whose asymptotic merit factor exceeds six. Golay, who coined the term “merit factor”, speculated that even if such a set exists it might never be found, even numerically. But in 2004, Peter Borwein, Stephen Choi and I constructed binary sequences whose merit factor consistently exceeds the value 6.34, for sequence lengths up to several million. Although no-one has yet shown that the merit factor remains above 6.34 when this construction is applied to arbitrarily long sequences, Kai-Uwe Schmidt and I proved in 2010 that a similar construction applied to m-sequences increases the asymptotic merit factor from 3 to greater than 3.34.
Definition
Formally, given complex-valued functions f and g of a natural number variable n, one writes

    f ~ g    (as n → ∞)

to express the fact that

    lim f(n)/g(n) = 1    as n → ∞,

and f and g are called asymptotically equivalent as n → ∞. This defines an equivalence relation on the set of functions that are nonzero for all n large enough. Alternatively, a more general definition is that

    f = (1 + o(1)) g,

using little-o notation, which defines an equivalence relation on all functions. In each case, the equivalence class of f informally consists of all functions g which "behave like" f, in the limit. Here, o(1) stands for some function of n whose value tends to 0 as n → ∞; in general o(h(n)) stands for some function k(n) such that k(n)/h(n) tends to 0 as n → ∞.
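As a concrete illustration (my own example, not one from the text), f(n) = n² + 7n and g(n) = n² are asymptotically equivalent: their ratio is 1 + 7/n = 1 + o(1), which tends to 1. A quick numerical check in Python:

```python
# Check numerically that f(n) = n^2 + 7n and g(n) = n^2 satisfy f ~ g:
# the ratio f(n)/g(n) = 1 + 7/n tends to 1 as n -> infinity.
def f(n):
    return n * n + 7 * n

def g(n):
    return n * n

for n in [10, 1000, 100000]:
    print(n, f(n) / g(n))   # ratios: 1.7, 1.007, 1.00007 -- approaching 1
```

Note that f − g = 7n still grows without bound; asymptotic equivalence constrains the ratio, not the difference.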

Big O notation (also known as Landau notation or asymptotic notation) has been developed to provide a convenient language for the handling of statements about order of growth, and is now ubiquitous in the analysis of algorithms. The asymptotic point of view is basic in computer science, where the question is typically how to describe the resource implication of scaling up the size of a computational problem.

Asymptotic expansion
An asymptotic expansion of a function f(x) is in practice an expression of that function in
terms of a series, the partial sums of which do not necessarily converge, but such that
taking any initial partial sum provides an asymptotic formula for f. The idea is that
successive terms provide a more and more accurate description of the order of growth
of f. An example is Stirling's approximation.
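Stirling's approximation asserts that n! ~ √(2πn)(n/e)^n: the ratio of the two sides tends to 1, even though their difference grows without bound. A quick check of the leading term (my own sketch):

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in [5, 20, 100]:
    ratio = stirling(n) / math.factorial(n)
    print(n, ratio)   # approaches 1 from below, roughly like 1 - 1/(12n)
```

The relative error at n = 100 is already under 0.1%, consistent with the 1/(12n) size of the first neglected correction term.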

In symbols, it means we have

    f ~ g_1

but also

    f − g_1 ~ g_2

and

    f − g_1 − … − g_{k−1} ~ g_k

for each fixed k, while some limit is taken, usually with the requirement that g_{k+1} = o(g_k), which means the (g_k) form an asymptotic scale. The requirement that the successive sums improve the approximation may then be expressed as

    f − (g_1 + … + g_k) = o(g_k).

If the asymptotic expansion does not converge, then for any particular value of the argument there will be a particular partial sum which provides the best approximation, and adding further terms will decrease the accuracy. However, this optimal partial sum will usually have more terms as the argument approaches the limit value.
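This optimal-truncation behaviour is easy to observe in the Stirling series for ln Γ(x), whose correction terms B_{2k}/(2k(2k−1)x^{2k−1}) eventually grow for any fixed x. A sketch (the Bernoulli numbers are hard-coded, and x = 1 is deliberately small so that the turnaround appears after only a few terms):

```python
import math
from fractions import Fraction

# Stirling series for ln Gamma(x):
#   ln Gamma(x) ~ (x - 1/2) ln x - x + (1/2) ln(2 pi) + sum_k c_k / x^(2k-1),
# with c_k = B_{2k} / (2k (2k-1)). The series diverges for every fixed x.
bernoulli = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30),
             Fraction(5, 66), Fraction(-691, 2730), Fraction(7, 6)]   # B_2 .. B_14
coeffs = [b / (2 * k * (2 * k - 1)) for k, b in enumerate(bernoulli, start=1)]

x = 1.0
exact = math.lgamma(x)                                   # ln Gamma(1) = 0
approx = (x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi)
errors = [abs(approx - exact)]
for k, c in enumerate(coeffs, start=1):
    approx += float(c) / x ** (2 * k - 1)
    errors.append(abs(approx - exact))

# The error first shrinks and then grows again: the best approximation is an
# intermediate partial sum, not the longest one.
print(errors)
```

Running this shows the error dropping for the first few correction terms and then climbing, exactly the behaviour described above.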

Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method, saddle-point method, method of steepest descent) or in the approximation of probability distributions (Edgeworth series). The famous Feynman graphs in quantum field theory are another example of asymptotic expansions which often do not converge.
