DAB Vs DAB+
Introduction
The DAB system was designed in the 1980s, and because the technologies it uses are so old
-- they remain unchanged to this day -- DAB is a very inefficient system by today's
standards.
In 2003, new systems emerged that had been designed to enable mobile TV, such as DVB-H
and T-DMB, and these systems could also carry digital radio. Crucially, however, these
systems used the AAC+ audio codec and Reed-Solomon error correction coding, and the
combination of these two technologies made DVB-H six times as efficient as DAB, and
T-DMB four times as efficient as DAB.
Because of these inherent problems, broadcasters and governments from numerous countries
became opposed to using the old DAB system, so WorldDAB (now called WorldDMB) was
forced to upgrade the old DAB system or risk seeing the UK, Denmark and Norway stranded
on the old system while all other countries chose a more modern system instead. The
upgrade they came up with is called 'DAB+', and this page compares the technologies used
on DAB with those used on the new DAB+ system.
The following table shows the scores achieved in listening tests for MP2, AAC and AAC+:
Group 1 contains important information for things like synchronisation and audio stream
information; group 2 contains the scale factors, which scale the subband samples (these
form the exponent of a crude floating point number system); group 3 contains the subband
samples (these form the mantissa of that crude floating point number system); and group 4
consists of the PAD (programme associated data) and the scale factors CRC (cyclic
redundancy check).
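This exponent/mantissa split can be sketched with a toy decoder. The scale-factor mapping and bit width below are simplified illustrations, not the real MPEG-1 Layer II tables:

```python
def scalefactor(index: int) -> float:
    # Toy scale-factor table: each index step halves the gain.
    # (The real Layer II table uses finer steps of 2^(1/3); this is simplified.)
    return 2.0 ** -index

def decode_sample(quantised: int, sf_index: int, bits: int = 8) -> float:
    # Map the quantised integer into [-1, 1) -- this acts as the mantissa ...
    mantissa = quantised / float(1 << (bits - 1))
    # ... and apply the scale factor -- the exponent of the crude float.
    return mantissa * scalefactor(sf_index)

# A corrupted scale-factor index shifts the output by whole powers of two,
# which is why scale-factor errors are so audible.
print(decode_sample(64, 0))  # 0.5
print(decode_sample(64, 3))  # 0.0625
```

The same quantised sample decoded under a wrong scale factor lands at a completely different amplitude, which is the root of the audibility problem discussed below.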
The Rc values quoted in the figure are those used for 128 kbps using Protection Level 3
(PL3), which is by far the most widely used Protection Level.
Although from the figure you might at first glance think that the main problem would be with
the sub-band samples, because they have the weakest error protection, the main problem is
in fact the insufficient protection of the scale factors.
The scale factors form the exponent of the crude floating point system used to encode the
subband audio samples, and any errors in these scale factors should be detected by the scale
factors' CRC check. When such errors are detected, the receiver either mutes the affected
subbands or applies crude error concealment techniques to them, which produces the
"bubbling mud" sound that accompanies poor DAB reception quality.
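A minimal sketch of that check-and-conceal behaviour, assuming an illustrative CRC-8 polynomial and a simple "repeat the previous frame's scale factors" concealment strategy (the real DAB ScF-CRC parameters and receiver behaviour differ in detail):

```python
def crc8(data: bytes, poly: int = 0x1D, init: int = 0xFF) -> int:
    # Bitwise CRC-8; the polynomial and init value are illustrative,
    # not the ScF-CRC parameters mandated by the DAB specification.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def conceal(scalefactors: bytes, received_crc: int, previous: bytes) -> bytes:
    # If the CRC does not match, fall back to the previous frame's scale
    # factors rather than play corrupted ones; when errors persist, this
    # kind of crude concealment is what listeners hear as "bubbling mud".
    if crc8(scalefactors) == received_crc:
        return scalefactors
    return previous

good = bytes([10, 12, 9, 11])
corrupted = bytes([10, 12, 200, 11])
tx_crc = crc8(good)
print(conceal(good, tx_crc, b"\x00" * 4) == good)  # True: CRC matches
print(conceal(corrupted, tx_crc, good) == good)    # True: fell back to previous
```

A CRC-8 is guaranteed to catch any error burst confined to 8 bits, so the single corrupted byte above is always detected.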
The problem with the error protection of the scale factors is that DAB's error correction
coding scheme uses convolutional coding (which is by no means a strong form of error
correction when used on its own), and the code rate used to protect the scale factors is only
8/18, or 0.44. Using nothing but a convolutional code at a code rate of 0.44 to protect
something as crucial to the correct playback of digital audio as the scale factors is far too
weak, and it is unsurprising that reception problems are rife on DAB.
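What a code rate of 8/18 means in practice can be shown with a little arithmetic: for every 8 information bits, 18 bits are transmitted. The 800-bit figure below is just an example quantity:

```python
from fractions import Fraction

def transmitted_bits(info_bits: int, code_rate: Fraction) -> int:
    # A code rate Rc = k/n means every k information bits are expanded
    # into n coded bits, so the channel must carry info_bits / Rc bits.
    return int(info_bits / code_rate)

scf_rate = Fraction(8, 18)  # the rate quoted for the scale factors (128 kbps, PL3)

print(float(scf_rate))                  # 0.444...
print(transmitted_bits(800, scf_rate))  # 800 info bits -> 1800 coded bits
```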
The MP2 Robustness Myth
One thing that proponents of the old DAB system have consistently claimed is that MP2 is
somehow more robust for use on digital radio systems than other audio codecs are. This view
is typified in a comment made by Quentin Howard, the chief executive of Digital One and the
current President of WorldDMB, when he said:
"... AAC+ and WM9 [are used] in other applications and an enhanced Reed-Solomon layer of
error correction [is] available for these more fragile encoding algorithms."
The argument put forward by the proponents of the old DAB system goes as follows: Audio
codecs such as MP3, AAC and AAC+ must use extra error correction coding to protect them
whereas MP2 doesn't need any extra error correction coding to protect it on DAB, therefore
MP2 must be more error-robust than the other audio codecs.
This is completely false.
As discussed above, DAB uses UEP to protect MP2, and the reason it uses UEP is that
both the length of an MP2 audio frame and the groups within each audio frame are fixed, so
UEP can easily be applied. It is only the use of UEP on DAB that makes it appear as
though MP2 is more robust than other audio codecs, when in fact it is no more robust.
The length of audio frames for MP3, AAC, AAC+ and so on is not fixed, so it is not as easy
to use UEP with these other audio codecs -- although it is not impossible, because DRM uses
UEP to protect AAC+.
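Because the group boundaries in an MP2 frame are fixed, a UEP profile amounts to nothing more than one code rate per group. The group sizes and rates below are made-up illustrative numbers, not the actual DAB PL3 profile:

```python
from fractions import Fraction

# (info_bits, code_rate) per group -- illustrative values only.
uep_profile = [
    (32,  Fraction(1, 3)),   # group 1: header/sync info, strongest protection
    (96,  Fraction(8, 18)),  # group 2: scale factors
    (640, Fraction(2, 3)),   # group 3: subband samples, weakest protection
    (32,  Fraction(1, 2)),   # group 4: PAD + ScF-CRC
]

def coded_size(profile):
    # Each group is expanded by 1/Rc; UEP only works because the
    # boundaries between groups sit at fixed, known offsets.
    return sum(int(bits / rate) for bits, rate in profile)

print(coded_size(uep_profile))  # coded bits under this UEP profile
# The same data under EEP: one rate (here 1/2) for the whole frame.
print(coded_size([(bits, Fraction(1, 2)) for bits, _ in uep_profile]))
```

With variable-length frames the group offsets move from frame to frame, which is why applying a per-group profile like this to MP3 or AAC is much harder.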
The proponents of the old DAB system are simply failing to understand what I mentioned at
the beginning of the section on Error Correction Coding, which is that it is better to use
stronger error correction coding because this allows the capacity of a multiplex to increase.
And indeed, DAB+ uses EEP (equal error protection) convolutional coding along with an
outer layer of Reed-Solomon coding, which is far stronger than the error correction coding
scheme used on DAB, and this will allow the multiplex capacity on DAB+ to increase by
about 30-40% compared to the capacity of a multiplex using the old DAB system -- unless
the broadcasters decide to greatly extend the coverage area rather than take advantage of the
increase in capacity.
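One way a figure in the 30-40% range can arise is sketched below: the Reed-Solomon outer code lets DAB+ run a weaker (higher-rate) inner convolutional code for the same overall robustness. The gross capacity and the inner code rates are illustrative assumptions; only the RS(120, 110) outer code is taken from DAB+ itself:

```python
def multiplex_capacity(gross_kbps: float, conv_rate: float, rs_rate: float = 1.0) -> float:
    # Usable capacity = gross channel bits * inner (convolutional) rate
    #                   * outer (Reed-Solomon) rate.
    return gross_kbps * conv_rate * rs_rate

gross = 1152.0  # illustrative gross multiplex capacity in kbps

dab = multiplex_capacity(gross, conv_rate=0.5)  # old DAB, PL3-ish average rate
# DAB+ can afford a higher inner rate because the RS(120, 110) outer
# code mops up the residual errors the Viterbi decoder lets through.
dab_plus = multiplex_capacity(gross, conv_rate=0.75, rs_rate=110 / 120)

print(dab, dab_plus, dab_plus / dab)  # ratio is about 1.375, i.e. ~37% more
```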
So consider the case of MP2 and AAC being protected by identical EEP convolutional
coding along with an interleaver of infinite depth (the job of an interleaver is to make the
errors uniformly randomly distributed). The areas of the audio frame for both MP2 and AAC
will have identical protection, so the BER (bit error rate -- the proportion of bits that are
in error; it's not really a rate, but that's the name for it) will be identical for both audio
codecs, and neither codec will be any more robust than the other.
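That thought experiment is easy to simulate: with a perfect interleaver, bit errors land uniformly at random, so under EEP every region of the frame sees the same error rate whichever codec is in use. The frame layouts below are illustrative, not real bitstream syntax:

```python
import random

def region_error_rates(regions, ber, trials=2000, seed=1):
    # With a perfect interleaver, residual errors are uniformly random,
    # so under EEP every region suffers the same BER in expectation.
    rng = random.Random(seed)
    hits = {name: 0 for name, _ in regions}
    total = {name: bits * trials for name, bits in regions}
    for _ in range(trials):
        for name, bits in regions:
            hits[name] += sum(1 for _ in range(bits) if rng.random() < ber)
    return {name: hits[name] / total[name] for name, _ in regions}

mp2 = [("header", 32), ("scalefactors", 96), ("samples", 640)]
aac = [("header", 24), ("spectral_data", 744)]

print(region_error_rates(mp2, ber=0.01))
print(region_error_rates(aac, ber=0.01))
# Every region's measured rate clusters around 0.01 for both layouts:
# no region of either codec's frame is favoured.
```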
Here is Quentin Howard's quote in full (as well as being the current President of
WorldDMB, he is the chief executive of the UK national commercial DAB multiplex
operator, Digital One):
"Spurious claims from some quarters that MPEG-1 Layer2 audio is outdated or inefficient is
a failure to understand the beauty of the way the frame length of MPEG and COFDM co-
exist and the benefit of UEP which together deliver a very robust audio experience. Eureka-
147 allows for other audio coding, of course, with BSAC being used in Korea, AAC+ and
WM9 in other applications and an enhanced Reed-Solomon layer of error correction
available for these more fragile encoding algorithms."
I find it hard to put into words just how ridiculously contradictory I think that statement is,
because on the one hand it recognises that UEP makes reception quality more robust, but it
then goes on to completely ignore the benefit that UEP brings to MP2 and accuses the more
modern codecs of being "fragile".
What seemingly all the DAB supporters get wrong is that it is ONLY the UEP coding that
makes them think that MP2 is more robust than other codecs. Without UEP you have EEP —
Equal Error Protection, where the whole audio frame is protected with the same code rate, so
all sections of the audio frame are protected with equal strength — and if you protected MP2
using EEP then it would be no more robust to errors than the more modern audio codecs.
For example, say MP2 and AAC streams were both being protected by the same EEP error
correction coding, and the error correction coding failed to correct a bit error in the header
part of the audio frames for both MP2 and AAC. The audio would be disturbed on both MP2
and AAC and the listener would likely notice the disturbance.
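The point can be made concrete with a toy example: an uncorrected bit error in the header region is equally damaging whichever codec the frame belongs to. The header bytes below are illustrative stand-ins, not real bitstreams:

```python
def flip_bit(frame: bytes, bit_index: int) -> bytes:
    # Simulate a single residual bit error that the error correction
    # coding failed to fix.
    out = bytearray(frame)
    out[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(out)

mp2_frame = bytes.fromhex("fffd900044")  # illustrative MP2 header bytes
aac_frame = bytes.fromhex("fff150801f")  # illustrative AAC header bytes

# Under EEP both headers are hit with the same probability, and a hit
# corrupts the frame in both cases -- neither codec is favoured.
print(flip_bit(mp2_frame, 3) != mp2_frame)  # True
print(flip_bit(aac_frame, 3) != aac_frame)  # True
```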
It is ONLY the use of UEP that makes the DAB supporters think that MP2 is more
robust than other codecs, and if MP2 were protected by EEP it would be no more
robust to errors than any other audio codec.