
A Compressive Sensing Codec Architecture for ECG Signals with Adaptive Quantization and Stream Entropy Coding

Shailesh Kumar, Surendra Prasad, Senior Member, IEEE, and Brejesh Lall, Member, IEEE

Abstract—Remote monitoring of electrocardiogram (ECG) signals plays a critical role in the management of cardiovascular diseases. Current long term ECG monitoring systems generate a large amount of data that needs to be sent across a wireless body area network. The wearable devices are resource limited. Hence a low resource consuming compression scheme is desirable. Compressive sensing (CS) is quite appealing as a low complexity method for compression of ECG data. This paper investigates the distribution of the compressive measurements and proposes a CS based codec architecture comprising simple quantization and asymmetric numeral systems (ANS) based entropy coding steps that can significantly boost the compression ratio without sacrificing reconstruction performance. The encoder has low computational complexity beyond the sensing operation and can be implemented entirely using integer arithmetic. The quantized Gaussian entropy model for the compressive measurements is estimated directly from the data and can be easily adapted to achieve better compression. Our decoder architecture is flexible in choosing any suitable reconstruction method. We have tested our encoder with the bound optimization based block sparse Bayesian learning (BSBL-BO) reconstruction algorithm on the MIT-BIH Arrhythmia database. We have validated that our encoder can achieve an additional 20-30% of space savings, without any impact on reconstruction quality, on top of the savings in terms of measurements. We have also clearly described the digitization process for the measurements, which has often been ignored in the literature on CS based compression of ECG data. As part of reproducible research we have open sourced our codec (implemented in Python with JAX based GPU acceleration for the decoder).

I. INTRODUCTION

Noncommunicable diseases (NCDs) account for 72% of all global deaths, with cardiovascular diseases (CVDs) accounting for 44% of all NCD mortality [1]. For patients with CVDs, wearable device based remote ECG monitoring plays a critical role in their disease management and care.

In wireless body area network (WBAN) based telemonitoring networks [2], the energy consumption on sensor nodes is a primary design constraint [3]. The wearable sensor nodes are often battery operated. It is necessary to reduce energy consumption as much as possible.

However, long term ECG monitoring can generate a large amount of uncompressed data. For example, each half hour 2 lead recording in the MIT-BIH Arrhythmia database [4] requires 1.9 MB of storage. As shown in [5], in a real time telemonitoring sensor node, the wireless transmission of data consumes most of the energy. The real time compression of ECG data by a low complexity encoder has therefore received significant attention in the past decade.

ECG signal compression has been an active area of interest for several decades. Extensive surveys can be found in [6], [7]. Compressive sensing (CS) based techniques for ECG data compression have been reviewed in [8], [9].

Transform domain techniques (e.g., Discrete Cosine Transform [10]–[12], Discrete Wavelet Transform [13]–[16]) are popular in ECG compression and achieve high compression ratios (CR) at clinically acceptable quality. However, they require computationally intensive sparsifying transforms on all data samples and are thus not suitable for WBAN sensor nodes [8].

Compressive sensing [17]–[21] uses a sub-Nyquist sampling method, acquiring a small number of incoherent measurements which are sufficient to reconstruct the signal if the signal is sufficiently sparse in some basis. For a sparse signal x ∈ R^n, one makes m linear measurements, where m ≪ n, which can be mathematically represented by a sensing operation

    y = Φx    (1)

where Φ ∈ R^(m×n) is a matrix representation of the sensing process and y ∈ R^m is the set of m measurements collected for x. A suitable reconstruction algorithm can recover x from y.

Ideally, the sensing process should be implemented at the hardware level in the analog to digital conversion (ADC) process. However, much of the use of CS in ECG follows a digital CS paradigm [5] where the ECG samples are acquired first by the ADC circuit on the device and then translated into incoherent measurements via multiplication by a digital sensing matrix. These measurements are then transmitted to remote telemonitoring servers. A suitable reconstruction algorithm is used on the server to recover the original ECG signal from the compressive measurements. Reconstruction algorithms for ECG signals include greedy algorithms [22] (simultaneous orthogonal matching pursuit), optimization based algorithms [23], [5] (SPG-L1), Bayesian learning based algorithms [24]–[26], and deep learning based algorithms [27]. In order to keep the sensing matrix multiplication simple and efficient, sparse binary sensing matrices are a popular choice [5], [24].

A. Problem Formulation

We consider the problem of efficient transmission of compressive measurements of ECG signals over wireless body area networks under the digital compressive sensing paradigm. Let x be an ECG signal and y be the corresponding stream of compressive measurements. Our goal is to transform y into a bitstream s with as few bits as possible without losing the signal reconstruction quality.

We consider whether a digital quantization of the compressive measurements affects the reconstruction quality. Further, we study the empirical distribution of compressive measurements to explore efficient ways of entropy coding of the quantized measurements.

A primary constraint in our design is that the encoder should avoid any floating point arithmetic so that it can be implemented efficiently in low power devices.

B. Related Work

The literature on the use of CS for ECG compression is mostly focused on the design of the specific sensing matrix, sparsifying dictionary, or reconstruction algorithm for the high quality reconstruction of the ECG signal from the compressive measurements. To the best of our knowledge, (digital) quantization and entropy coding of the compressive measurements of ECG data hasn't received much attention in the past.

Mamaghanian et al. [5] use a Huffman codebook which is deployed inside the sensor device. They don't use any quantization of the measurements. However, they don't provide much detail on how the codebook was designed or how it should be adapted for variations in ECG signals. They clearly define the compression ratio in terms of a ratio between the uncompressed bits bits_u and the compressed bits bits_c. They define it as (bits_u − bits_c)/bits_u × 100. This is often known in the literature as percentage space savings.

Simulation based studies often send the real valued compressive measurements to the decoder modules and don't consider the issue of the number of bits required to encode each measurement. Zhang et al. [24], Zhang et al. [27] define (n − m)/n × 100 as the compression ratio. Mangia et al. [28] define n/m as the compression ratio. Polania et al. [29] define n/m as the compression ratio. Picariello et al. [30] use a non-random sensing matrix. They infer a circulant binary sensing matrix directly from the ECG signal being compressed. The sensing matrix is adapted as and when there is significant change in the signal statistics. They represent both the ECG samples and compressive measurements with the same number of bits per sample/measurement. However, in their case, there is the additional overhead of sending the sensing matrix updates. Their compression ratio is slightly lower than n/m, where they call n/m the under-sampling ratio.

Huffman codes have frequently been used in ECG data compression for non-CS methods. Luo et al. in [31] proposed a dynamic compression scheme for ECG signals which consisted of a digital integrate and fire sampler followed by an entropy coder. They used Huffman codes for entropy coding of the timestamps associated with the impulse train. Chouakri et al. in [32] compute the DWT of the ECG signal by the Db6 wavelet, select coefficients using higher order statistics thresholding, then perform linear prediction coding and Huffman coding of the selected coefficients.

Asymmetric Numeral Systems (ANS) [33] based entropy coding schemes have seen much success in recent years for lossless data compression. zstd by Facebook Inc. [34] and LZFSE by Apple Inc. are popular lossless data compression formats based on ANS. To the best of our knowledge, ANS type stream codes have not been considered in the past for the entropy coding of compressive measurements.

C. Contributions

In this paper, we present an adaptive quantization and entropy coding scheme for the digital compressive measurements in the encoders running on the WBAN sensor nodes. The adaptive quantization scheme has been designed in a manner so that the quantization noise is kept within a specified threshold. Once the measurements are quantized and clipped, they are ready to be entropy coded. A quantized Gaussian distribution of the measurements is estimated directly from the data and the parameters for this distribution are used for the entropy coding of the measurements. Asymmetric numeral systems (ANS) based entropy coding is then used for efficient entropy coding of the quantized and clipped measurements. The ECG signal is split into frames for encoding purposes. We have designed a bitstream format with a stream header to describe the encoding parameters and a frame header for each encoded frame containing frame wise quantization parameters. We have designed a decoder which accepts this encoded bitstream and reconstructs the ECG signal frame by frame in real time.

Our encoding scheme doesn't require a fixed codebook for entropy coding. The encoder has been designed in a manner so that all the operations can be implemented by integer arithmetic. We have tested our encoder on the MIT-BIH Arrhythmia database and are able to demonstrate additional space savings beyond the savings from the reduced number of measurements. Our implementation is available as part of reproducible research on GitHub [35].

D. Paper Organization

The rest of the paper is organized as follows. Section II describes the ECG database used for the evaluation of the codec and the performance metrics. Section III describes the proposed codec architecture. The analysis of the experimental results is covered in section IV. Section V summarizes our contributions and identifies future work. Appendix A provides a short overview on compressive sensing. Appendix C provides a short overview on entropy coding.

II. ECG DATABASE AND PERFORMANCE METRICS

We use the MIT-BIH Arrhythmia Database [4] from PhysioNet [36]. The database contains 48 half hour excerpts of two channel ambulatory ECG recordings from 47 subjects. The recordings were digitized at 360 samples per second for both channels with 11 bit resolution over a 10 mV range. The samples can be read in both digital (integer) form or as physical values (floating point) via the software provided by PhysioNet. We use the integer values in our experiments since our encoder is designed for integer arithmetic. We use the MLII signal (first channel) from each recording in our experiments.
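To make the digital sensing step of (1) concrete, here is a minimal sketch in Python. It is our own illustration, not the reference implementation from [35]: the helper name `build_sensing_matrix`, the seeding scheme, and the synthetic input are all ours. It builds a sparse binary sensing matrix with exactly d ones per column and applies it to an integer ECG window, so the whole operation stays in integer arithmetic.

```python
import numpy as np

def build_sensing_matrix(m, n, d, seed=0):
    """Sparse binary sensing matrix: each column has exactly d ones
    at randomly chosen row positions (a hypothetical construction
    mirroring the description in the paper)."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n), dtype=np.int64)
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)
        Phi[rows, j] = 1
    return Phi

# A stand-in integer ECG window (baseline-adjusted 11-bit samples).
n, m, d = 512, 256, 4
rng = np.random.default_rng(1)
x = rng.integers(-1024, 1024, size=n)

Phi = build_sensing_matrix(m, n, d)
y = Phi @ x  # integer measurements: each is a sum of selected samples
```

Because Φ contains only zeros and ones, each measurement is just a sum of (on average nd/m) integer samples, which is why no floating point arithmetic is needed for sensing.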
A. Performance Metrics

In our encoder, the ECG signal is split into windows of n samples each, which can be multiplied with a sensing matrix. Each window of n samples generates m measurements by the sensing equation y = Φx. Assume that we are encoding s ECG samples where s = nw and w is the number of signal windows being encoded. Let the ECG signal be sampled by the ADC device at a resolution of r bits per sample.^1 Then the number of uncompressed bits is given by bits_u = rs.

1) Compression Ratio: Let the total number of compressed bits corresponding to the s ECG samples be bits_c.^2 Then the compression ratio (CR) is defined as

    CR ≜ bits_u / bits_c.    (2)

Percentage space saving (PSS) is defined as

    PSS ≜ (bits_u − bits_c) / bits_u × 100.    (3)

Note that often in the literature, PSS is defined as the compression ratio (e.g., [5]). Several papers ignore the bitstream formation aspect and report (m/n) × 100 (e.g., [37]) or ((n − m)/n) × 100 (e.g., [27]) as the compression ratio, which measures the reduction in the number of measurements compared to the number of samples in each window. We shall call this metric the percentage measurement saving (PMS):

    PMS ≜ (n − m)/n × 100.    (4)

The ratio m/n will be called the measurement ratio:

    MR = m/n.    (5)

The measurement ratio m/n is not a good indicator of the compression ratio. If the sensing matrix Φ is Gaussian, then the measurement values are real valued. In the literature using Gaussian sensing matrices (e.g., [37]), it is unclear how many bits are being used to represent each floating point measurement value for transmission. Under the standard 32-bit IEEE floating point format, each value would require 32 bits. Then for MIT-BIH data the compression ratio in bits would be 11n/(32m). The only way the ratio n/m would make sense is if the measurements are also quantized at 11 bit resolution. However, the impact of such quantization is not considered in the simulations.

Now consider the case of a sparse binary sensing matrix. Since it consists of only zeros and ones, for integer inputs it generates integer outputs. Thus, we can say that the outputs of a sparse binary sensor are quantized by design. However, the range of values changes. Assume that the sensing matrix has d ones per column. Then it has a total of nd ones. Thus, each row will have on average nd/m ones.^3 If we assume the input data to be in the range [−1024, 1023] (under 11-bit encoding), then in the worst case the range of output values may go up to [−(nd/m) × 1024, (nd/m) × 1023]. For a simple case where n = 2m and d = 4, we will require 14 bits to represent each measurement value. To achieve n/m as the compression ratio, we will have to quantize the measurements in 11 bits. If we do so, we shall need to provide some way to communicate the quantization parameters to the decoder, as well as study the impact of quantization noise. This issue seems to be ignored in [24].

Another way of looking at the compressibility is how many bits per sample (bps) are needed on average in the compressed bitstream. We define bps as:

    bps ≜ bits_c / s.    (6)

Since the entropy coder is coding the measurements rather than the samples directly, it is also useful to see how many bits are needed to code each measurement. We denote this as bits per measurement (bpm):

    bpm ≜ bits_c / (mw).    (7)

2) Reconstruction Quality: The normalized root mean square error is defined as

    NRMSE(x, x̃) ≜ ∥x − x̃∥₂ / ∥x∥₂    (8)

where x is the original ECG signal and x̃ is the reconstructed signal. A popular metric to measure the quality of reconstruction of ECG signals is the percentage root mean square difference (PRD):

    PRD(x, x̃) ≜ NRMSE(x, x̃) × 100.    (9)

The signal to noise ratio (SNR) is related to PRD as

    SNR ≜ −20 log₁₀(0.01 PRD).    (10)

As one desires higher compression ratios and lower PRD, one can define a combined quality score (QS) as

    QS = (CR / PRD) × 100.    (11)

Zigel et al. [38] established a link between diagnostic distortion and the easy to measure PRD metric. Table I shows the classified quality and the corresponding SNR (signal to noise ratio) and PRD ranges.

TABLE I
QUALITY OF RECONSTRUCTION [38]

    Quality       | PRD    | SNR
    Very good     | < 2%   | > 33 dB
    Good          | 2-9%   | 20-33 dB
    Undetermined  | ≥ 9%   | ≤ 20 dB

^1 For the MIT-BIH Arrhythmia database, r = 11.
^2 This includes the overhead bits required for the stream header and frame headers to be explained later.
^3 Since the ones are randomly placed, we won't have the same number of ones in each row.
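The metrics defined above are straightforward to compute; the following sketch collects them into small helpers (the function names are ours, chosen for this illustration):

```python
import numpy as np

def nrmse(x, x_hat):
    """Normalized root mean square error, eq. (8)."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)

def prd(x, x_hat):
    """Percentage root mean square difference, eq. (9)."""
    return 100.0 * nrmse(x, x_hat)

def snr_from_prd(prd_value):
    """Signal to noise ratio in dB, eq. (10)."""
    return -20.0 * np.log10(0.01 * prd_value)

def compression_ratio(bits_u, bits_c):
    """CR, eq. (2)."""
    return bits_u / bits_c

def pss(bits_u, bits_c):
    """Percentage space saving, eq. (3)."""
    return 100.0 * (bits_u - bits_c) / bits_u

def pms(n, m):
    """Percentage measurement saving, eq. (4)."""
    return 100.0 * (n - m) / n

def quality_score(cr, prd_value):
    """Combined quality score, eq. (11)."""
    return 100.0 * cr / prd_value
```

For example, by (10) a PRD of 1% corresponds to an SNR of 40 dB, and by (4) the configuration n = 512, m = 256 gives a PMS of 50%.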
III. PROPOSED CODEC ARCHITECTURE

The ECG signal is split into windows of n samples each. The windows of ECG signal are further grouped into frames of w windows each. The last frame may have fewer than w windows. The encoder compresses the ECG signal frame by frame. The encoder converts the ECG signal into a bitstream and the decoder reconstructs the ECG signal from the bitstream.

The encoder first sends the encoder parameters in the form of a stream header (table II). Then for each frame of ECG signal, it sends a frame header (table III) followed by a frame payload consisting of the entropy coded measurements for the frame. The encoding algorithm is presented in algorithm 1. The decoding algorithm is presented in algorithm 2.

A. Stream Header

The stream header (table II) consists of all necessary information required to initialize the decoder. In particular, it contains the pseudo-random generator key that can be used to reproduce in the decoder the sparse binary sensing matrix used in the encoder, the number of samples per window (n), the number of measurements per window (m), the number of ones per column in the sensing matrix (d), and the maximum number of windows in each frame of ECG signal (w). It contains a flag indicating whether adaptive or fixed quantization will be used. It contains the limits on the normalized root mean square error for the adaptive quantization (ρ) and adaptive clipping (γ) steps. If fixed quantization is used, then it contains the fixed quantization parameter value (q). Both ρ and γ in the stream header are encoded using a simple decimal format a × 10^(−b), where a and b are encoded as 4-bit integers.

Following [5], we construct a sparse binary sensing matrix Φ of shape m × n. Each column of Φ consists of exactly d ones and m − d zeros, where the positions of the ones have been randomly selected in each column. An identical algorithm is used to generate the sensing matrix from the stream header parameters in both the encoder and the decoder. The randomization is seeded with the parameter key sent in the stream header.

TABLE II
STREAM HEADER

    Parameter | Description                                  | Bits
    key       | PRNG key for Φ                               | 64
    n         | Window size                                  | 12
    m         | Number of measurements per window            | 12
    d         | Number of ones per column in sensing matrix  | 6
    w         | Number of windows per frame                  | 8
    adaptive  | Adaptive or fixed quantization flag          | 1
    ρ (if adaptive) | NRMSE limit for adaptive quantization  | 8
    q (otherwise)   | Fixed quantization parameter           | 4
    γ         | NRMSE limit for clipping                     | 8

TABLE III
FRAME HEADER

    Parameter | Description                  | Bits
    µ_y       | Mean value                   | 16
    σ_y       | Standard deviation           | 16
    q         | Frame quantization parameter | 3
    r         | Frame range parameter        | 4
    n_w       | Windows in frame             | 8
    n_c       | Words of entropy coded data  | 16

B. Encoding

Here we describe the encoding process for each frame. Figure 1 shows a block diagram of the frame encoding process.

Fig. 1. Digital Compressive Sensing Encoder (Samples → Windowing → Digital Compressive Sensing → Flattening → Adaptive Quantization → Adaptive Clipping → Entropy Coding → Compressed Bits)

Algorithm 1: Encoder algorithm

    Send stream header;
    Build sensing matrix Φ;
    foreach frame of ECG signal as x do
        X ← window(x);
        n_w ← number of windows in the frame;
        // Sense
        Y ← ΦX;
        y ← flatten(Y);
        // Adaptive quantization
        for q = q_max down to q_min do
            ȳ ← (1/2^q) y;
            ỹ ← 2^q ȳ;
            if NRMSE(y, ỹ) ≤ ρ then break;
        end
        // Quantized Gaussian model parameters
        µ_y ← round(mean(ȳ));
        σ_y ← ceiling(std(ȳ));
        σ_y ← max(σ_y, 1);
        // Adaptive range adjustment
        for r = 2 to 8 do
            y_min ← µ_y − rσ_y;
            y_max ← µ_y + rσ_y;
            ŷ ← clip(ȳ, y_min, y_max);
            if NRMSE(y, ŷ) ≤ γ then break;
        end
        c ← ans_code(ŷ, µ_y, σ_y, y_min, y_max);
        n_c ← number of words in c;
        Send frame header(µ_y, σ_y, q, r, n_w, n_c);
        Send frame payload(c);
    end

The input to the encoder is a frame of digital ECG samples at a given sampling rate f_s. In the MIT-BIH database, the samples are digitally encoded in 11 bits per sample (unsigned). Let the frame of ECG signal be denoted by x. The frame is split into non-overlapping windows of n samples each. We shall denote each window of n samples by the vectors x_i.

1) Windowing: A frame of ECG consists of multiple such windows (up to a maximum of w windows per frame as per the encoder configuration). Let the windows be denoted by x_1, x_2, .... Let there be w such windows. We can put them together to form the (signed) signal matrix:^4

    X = [x_1 x_2 ... x_w].    (12)

^4 PhysioNet provides the baseline values for each channel in their ECG records. Since the digital samples are unsigned, we have subtracted the baseline value (1024 for 11-bit encoding) from them. 11 bits mean that unsigned values range from 0 to 2047. The baseline for zero amplitude is digitally represented as 1024. After baseline adjustment, the range of values becomes [−1024, 1023].

2) Compressive Sensing: The digital compressive sensing is represented as

    y_i = Φx_i    (13)

where y_i is the measurement vector for the i-th window. Combining the measurement vectors for each window, the measurement matrix for the entire record is given by

    Y = ΦX.    (14)

Note that by design, the sensing operation can be implemented using just lookup and integer addition. The ones in each row of Φ identify the samples within the window to be picked up and summed. Consequently, Y consists of only signed integer values.

3) Flattening: Beyond this point, the window structure of the signal is not relevant for quantization and entropy coding purposes in our design. Hence, we flatten it (column by column) into a vector y of mw measurements:

    y = flatten(Y).    (15)

4) Quantization: Next, we perform a simple quantization of the measurement values. If fixed quantization has been specified in the stream header, then for each entry y in y, we compute

    ȳ = y/2^q.    (16)

For the whole measurement vector, we can write this as

    ȳ = (1/2^q) y.    (17)

This can be easily implemented in a computer as a signed right shift by q bits (for integer values). This reduces the range of values in y by a factor of 2^q.

If adaptive quantization has been specified, then we vary the quantization parameter q from a value q_max = 6 down to a value q_min = 0. For each value of q, we compute ȳ = (1/2^q) y. We then compute ỹ = 2^q ȳ. We stop if NRMSE(y, ỹ) reaches a limit specified by the parameter ρ in the stream header.

5) Entropy Model: Before we explain the clipping step, we shall describe our entropy model. We model the measurements as samples from a quantized Gaussian distribution which can take only integral values. First, we estimate the mean µ_y and standard deviation σ_y of the measurement values in ȳ. We shall need to transmit µ_y and σ_y in the frame header. We round the values of µ_y and σ_y to the nearest integer so that they can be transmitted efficiently as integers.

Entropy coding works with a finite alphabet. Accordingly, the quantized Gaussian model requires specification of the minimum and maximum values that our quantized measurements can take. The range of values in ȳ must be clipped to this range.

6) Adaptive Clipping: We clip the values in ȳ to the range [µ_y − rσ_y, µ_y + rσ_y], where r is the range parameter estimated from the data. This is denoted as

    ŷ = clip(ȳ, µ_y − rσ_y, µ_y + rσ_y).    (18)

Similar to adaptive quantization, we vary r from 2 to 8 till we have captured sufficient variation in ȳ and NRMSE(y, ŷ) ≤ γ.

7) Entropy Coding: We then model the measurement values in ŷ as a quantized Gaussian distributed random variable with mean µ_y, standard deviation σ_y, minimum value µ_y − rσ_y and maximum value µ_y + rσ_y. We use the ANS entropy coder to encode ŷ into an array c of 32-bit integers (called words). This becomes the payload of the frame to be sent to the decoder. The total number of compressed bits in the frame payload is the length n_c of the array times 32. Note that we have encoded and transmitted ŷ and not the unclipped ȳ. ANS entropy coding is a lossless encoding scheme. Hence, ŷ will be reproduced faithfully in the decoder if there are no bit errors involved in the transmission.^5

^5 We assume that an appropriate channel coding mechanism has been used.

8) Integer Arithmetic: The input to digital compressive sensing is a stream of integers. The sensing process with a sparse binary sensing matrix can be implemented using integer sums and lookup. It is possible to implement the computation of approximate mean and standard deviation using integer arithmetic. We can use normalized mean square error based thresholds for the adaptive quantization and clipping steps. ANS entropy coding is fully implemented using integer arithmetic. Thus, the proposed encoder can be fully implemented using integer arithmetic.

C. Decoding

The decoder initializes itself by reading the stream header and creating the sensing matrix to be used for the decoding of compressive measurements frame by frame.

Fig. 2. Digital Compressive Sensing Decoder (Compressed Bits → Entropy Decoding → Inverse Quantization → Windowing → Reconstruction Algorithm → Flattening → Decoded Samples)

Here we describe the decoding process for each frame. Figure 2 shows a block diagram for the decoding process. Decoding of a frame starts by reading the frame header, which provides the frame encoding parameters: µ_y, σ_y, q, r, n_w, n_c.

Algorithm 2: Decoder algorithm

    Read stream header;
    Build sensing matrix Φ;
    while there is more data do
        µ_y, σ_y, q, r, n_w, n_c ← read frame header;
        c ← read frame payload(n_c);
        // Entropy model parameters
        y_min ← µ_y − rσ_y;
        y_max ← µ_y + rσ_y;
        ŷ ← ans_decode(c, µ_y, σ_y, y_min, y_max);
        // Inverse quantization
        ỹ ← 2^q ŷ;
        Ỹ ← window(ỹ);
        X̃ ← reconstruct(Ỹ);
        x̃ ← flatten(X̃);
    end

1) Entropy Decoding: The frame header is used for building the quantized Gaussian distribution model for the decoding of the entropy coded measurements from the frame payload. The minimum and maximum values for the model are computed as:

    y_min, y_max = µ_y − rσ_y, µ_y + rσ_y.    (19)

n_c tells us the number of words (4n_c bytes) to be read from the bitstream for the frame payload. The ANS decoder is used to extract the encoded measurement values ŷ from the frame payload.

2) Inverse Quantization: We then perform the inverse quantization as

    ỹ = 2^q ŷ.    (20)

Next, we split the measurements into measurement windows of size m corresponding to the signal windows of size n:

    Ỹ = window(ỹ).    (21)

We are now ready for the reconstruction of the ECG signal for each window.

3) Reconstruction: The architecture is flexible in terms of the choice of the reconstruction algorithm:

    X̃ = reconstruct(Ỹ).    (22)

Each column (window) in Ỹ is decoded independently. In our experiments, we built a BSBL-BO (Block Sparse Bayesian Learning - Bound Optimization) decoder [24], [26], [37]. Our implementation of BSBL is available as part of the CR-Sparse library [39]. As Zhang et al. suggest in [24], block sizes are user defined and identical, and no pruning of blocks is applied. Our implementation has been done under these assumptions and is built using JAX so that it can be run on GPU hardware easily to speed up decoding. The only configurable parameter for this decoder is the block size, which we shall denote by b in the following. Once the samples have been decoded, we flatten them (column by column) to generate the sequence of decoded ECG samples to form the ECG record:

    x̃ = flatten(X̃).    (23)

4) Alternate Reconstruction Algorithms: It is entirely possible to use a deep learning based reconstruction network like CS-NET [27] in the decoder. We will need to train the network with ỹ as inputs and x as expected outputs. However, we haven't pursued this direction yet as our focus was to study the quantization and entropy coding steps in this work. One concern with deep learning based architectures is that the decoder network needs to be trained separately for each encoder configuration and each database. The ability to dynamically change the encoder/decoder configuration parameters is severely restricted in deep learning based architectures. This limitation doesn't exist with BSBL type algorithms.

D. Discussion

1) Measurement statistics: Several aspects of our encoder architecture are based on the statistics of the measurements y. We collected the summary statistics including mean µ, standard deviation σ, range of values in y, skew and kurtosis for the measurement values for each of the ECG records in the MIT-BIH database. These values have been reported for one particular encoder configuration in table V. In addition, we also compute the range divided by the standard deviation (rng/σ) and the iqr (inter quantile range) divided by the standard deviation (iqr/σ).

2) Gaussianity: The key idea behind our entropy coding design is to model the measurement values as being sampled from a quantized Gaussian distribution. Towards this, we measured the skew and kurtosis of the measurements for each record, as shown in table V, for the sensing matrix configuration of m = 256, n = 512, d = 4. For a Gaussian distributed variable, the skew should be close to 0 while the kurtosis should be close to 3. While the skew is not very high, the kurtosis does vary widely. Figure 3 shows the histograms of measurement values for 6 different records. Although the measurements are not Gaussian distributed, it is not a bad approximation. The quantized Gaussian approximation works well in the entropy coding. It is easy to estimate from the data.

The best compression can be achieved by using the empirical probabilities of different values in y in entropy coding. However, doing so would require us to transmit the empirical probabilities as side information. This may be expensive. We can estimate the overhead in compression due to the use of the quantized Gaussian approximation. Let P denote the empirical probability distribution of the data and let Q denote the corresponding quantized Gaussian distribution. Bamler in [40] shows empirically that the overhead of using an approximating distribution Q in place of P in ANS entropy coding is close to the KL divergence KL(P∥Q), which is given by

    KL(P∥Q) = Σ_y P(y) log₂(P(y)/Q(y)).    (24)

We computed the empirical distribution of y for each record and measured its KL divergence with the corresponding quantized Gaussian distribution. It is tabulated in the kld column in table V. It varies around 0.11 ± 0.07 bits across the 48 records. Thus, the overhead of using a quantized Gaussian distribution in place of the empirical probabilities can be estimated to be 4-18%. One can see that the divergence tends to increase as the kurtosis increases. We determined the Pearson correlation coefficient between kurtosis and kld to be 0.67.

Figure 4 shows an example where the empirical distribution is significantly different from the corresponding quantized Gaussian distribution due to the presence of a large number of 0 values in y. Note that this is different from the histograms in fig. 3 where the y values have been binned into 200 bins. Also, fig. 3 suggests that the empirical distributions vary widely from one record to another in the database. Hence using a single fixed empirical distribution (e.g. the Huffman code book preloaded into the device in [5]) may lead to lower compression.
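The overhead estimate based on (24) can be reproduced with a short script. The sketch below is our own illustration (not the authors' measurement code): it builds the quantized Gaussian pmf by integrating the normal density over unit bins and renormalizing over the finite support, which is one reasonable reading of the paper's entropy model, and compares it against an empirical distribution.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(t, mu, sigma):
    """Standard normal CDF evaluated via erf (no SciPy needed)."""
    return 0.5 * (1.0 + erf((t - mu) / (sigma * sqrt(2.0))))

def quantized_gaussian_pmf(mu, sigma, ymin, ymax):
    """P(Y = k) proportional to the Gaussian mass of [k - 0.5, k + 0.5],
    renormalized over the finite support [ymin, ymax]."""
    ks = np.arange(ymin, ymax + 1)
    mass = np.array([normal_cdf(k + 0.5, mu, sigma) -
                     normal_cdf(k - 0.5, mu, sigma) for k in ks])
    return ks, mass / mass.sum()

def kl_bits(p, q):
    """KL(P||Q) in bits, eq. (24); bins with p = 0 contribute nothing."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Synthetic quantized measurements (a stand-in for one record's y values).
rng = np.random.default_rng(0)
samples = np.round(rng.normal(0.0, 10.0, size=100_000)).astype(int)

mu, sigma = samples.mean(), max(samples.std(), 1.0)
ymin, ymax = samples.min(), samples.max()
ks, q = quantized_gaussian_pmf(mu, sigma, ymin, ymax)

counts = np.bincount(samples - ymin, minlength=len(ks))
p = counts / counts.sum()
overhead_bits = kl_bits(p, q)  # near zero for genuinely Gaussian data
```

For truly Gaussian data the divergence is close to zero; on the real MIT-BIH measurements the paper reports roughly 0.11 ± 0.07 bits, growing with the kurtosis of the record.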
Fig. 3. Histograms of measurement values with the sparse binary sensing matrix with m = 256, n = 512, d = 4 in 200 bins: (a) Record 100, (b) Record 102, (c) Record 115, (d) Record 202, (e) Record 208, (f) Record 234.

Fig. 4. Empirical and quantized Gaussian distributions for measurement values y in record 234.

Fig. 5. Reconstruction of a small segment of record 100 for different values of q = 0, 2, 4, 5, 6, 7 with m = 256, n = 512, d = 4 under non-adaptive quantization. Block size for the BSBL-BO decoder is 32.

3) Clipping: An entropy coder can handle only a finite set of symbols. Hence, the range of input values (measurements coded as integers) must be restricted to a finite range. This is why one has to choose a distribution with finite support (such as the quantized Gaussian). From table V one can see that while the complete range of measurement values can be up to 40-50x larger than the standard deviation, the IQR is less than 1.5 times the standard deviation. In other words, the measurement values are fairly concentrated around their mean. This can also be seen visually from the histograms in fig. 3.

4) Quantization: Fig. 5 demonstrates the impact of the quantization step on the reconstruction quality for record 100 under non-adaptive quantization. Six windows of 512 samples were used in this example. The first panel shows the original (digital) signal. The remaining panels show the reconstruction at different quantization parameters. The visual quality of the reconstruction is excellent up to q = 5 (PRD below 7%), good at q = 6 (PRD at 9%), and clinically unacceptable at q = 7 (with PRD more than 15%). One can see significant waveform distortions at q = 7. Also note how the quality score keeps increasing till q = 4 and starts decreasing after that, with a massive drop at q = 7.

IV. RESULTS

A. Impact of quantization parameter

In fig. 6, we study the impact of the quantization parameter q on different compression statistics at different values of the measurement ratio mr = m/n for record 100. The number of measurements (m) has been chosen to vary from 20% to 60% of the window size (n) in different configurations. The quantization parameter varies from 0 to 7. Non-adaptive quantization was used for this experiment.

(a) shows that the reconstruction quality doesn't change much from q = 0 till q = 4, after which it starts degrading. As expected, the SNR/PRD degrades as m is reduced. At 30% and below, the PRD is above 9%, which is unacceptable.
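The quantization and clipping steps studied above reduce to integer shifts and comparisons, which is what allows the encoder to run entirely in integer arithmetic. A minimal sketch follows (helper names and the clipping range are illustrative assumptions, not the released implementation):

```python
def quantize(y, q):
    """Drop the q least significant bits of each integer measurement.
    An arithmetic right shift keeps everything in integer arithmetic."""
    return [v >> q for v in y]

def dequantize(yq, q):
    """Approximate inverse used at the decoder: scale back up and
    shift to the midpoint of the quantization bin."""
    return [(v << q) + ((1 << (q - 1)) if q > 0 else 0) for v in yq]

def clip(y, lo, hi):
    """Restrict measurements to the finite support [lo, hi] required by
    the entropy coder's finite alphabet."""
    return [min(max(v, lo), hi) for v in y]

# Toy integer measurement window, quantized with q = 4 and clipped
# to an assumed symbol range.
y = [-1500, -490, -3, 250, 2562]
yq = clip(quantize(y, 4), lo=-72, hi=72)
```

The reconstruction error introduced by these two steps is what the Q SNR column of table VI reports; increasing q trades this quantization noise against the space savings shown in fig. 6.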
(b) shows that PSS (percentage space saving) increases linearly with q. As q increases, the size of the alphabet for entropy coding reduces, and this leads to increased space savings.

(c) shows the variation of the quality score with q, and we can clearly see that the quality score generally increases till q = 4, after which it starts decreasing.

The key result is shown in panel (d), which shows the variation of PSS with PMS at different values of q. PSS increases linearly with PMS. Also, PSS is much higher than PMS at higher quantization levels.

We have also listed the numerical values in table IV. The first column, for q = 0, depicts the case where no quantization has been applied; we are only doing clipping and entropy coding. Since PSS is consistently higher than PMS, it is clear that we need less than 11 bits per sample on average after entropy coding of the measurement values. Increasing quantization linearly increases the PSS. Since there is no noticeable impact on PRD up to q = 4, as seen in panel (a), it is safe to enjoy these savings in bit rate. At 40% PMS, one can attain up to 69.3% PSS without losing any reconstruction quality, while pushing further to q = 6 yields a PSS of 79.9% with some degradation in quality, still staying under an acceptable PRD.

TABLE IV
PSS AT DIFFERENT VALUES OF q AND PMS FOR RECORD 100

PMS \ q     0     1     2     3     4     5     6     7
40.0     47.5  53.0  58.4  63.8  69.3  74.6  79.9  84.8
50.0     55.7  60.3  64.8  69.3  73.8  78.3  82.6  87.0
55.0     59.8  63.9  67.9  72.0  76.1  80.1  84.0  88.0
60.0     63.6  67.2  70.8  74.5  78.1  81.7  85.2  88.6
65.0     68.0  71.1  74.3  77.5  80.6  83.8  86.8  89.7
70.0     71.8  74.6  77.3  80.0  82.7  85.5  88.1  90.7
75.0     76.5  78.7  81.0  83.2  85.5  87.7  90.0  92.1
80.0     81.0  82.8  84.6  86.4  88.2  90.0  91.8  93.6

B. Compression across different records

Table VI shows the compression statistics for the encoder configuration of m = 256, n = 512, d = 4 and the decoder configuration of b = 32 (block size). The statistics are further summarized in table VII. The column OH reports the overhead in percentage of the bits needed to encode the stream header and frame headers. The column Q SNR reports the quantization noise introduced due to the quantization and clipping steps in the encoder. We can see that this configuration achieves a PSS of 66.05 ± 2.78%. We achieve an additional 16% of space savings over and above the PMS (50%). This additional space saving is the contribution of adaptive quantization and entropy coding. The PRD is 5.6 ± 1.8%. Out of the 48 records, only 4 records have a PRD greater than 9%. The maximum PRD is 10.2%. The quality score is 57.8 ± 18.6. The header bits overhead is 0.21 ± 0.01%, which is negligible. This configuration achieves 3.7 ± 0.2 bits per sample. The quantization SNR is 37.3 ± 0.9 dB. Our adaptive quantization algorithm ensures that the quantization SNR doesn't vary much.

V. CONCLUSION AND FUTURE WORK

A. Contributions

The key contributions of our encoder architecture are the following:
• We have presented a digital compressive sensing based encoder design including adaptive digital quantization and entropy coding which can be implemented entirely using integer arithmetic.
• We have demonstrated that ANS based arithmetic coding can lead to significant space savings.
• We have demonstrated that we don't need a hard-coded code-book for entropy coding. We can simply model the measurement values as quantized Gaussian distributed for effective entropy coding.
• We use simple mean and standard deviation as the entropy model parameters, which can be dynamically adapted. The overhead of the side information is negligible.
• We have demonstrated up to 16% of additional space savings contributed by the quantization and entropy coding steps.
• We have described a complete bitstream specification for carriage of all the side information needed to decode the entropy coded compressive measurements.

The software code implementing this codec and all scripts for the experimental studies conducted in this work have been released as open source software on GitHub [35].

B. Future Work

The reconstruction algorithm in the decoder is replaceable. Deep learning based architectures can provide a further boost to the quality of reconstruction. The impact of quantization and clipping on deep learning based reconstruction needs to be further studied.

The quantization, clipping and entropy coding blocks in our codec design are general purpose and should be useful in other CS based applications (e.g., EEG, EMG, FECG, etc.).

During the course of long term ECG monitoring, one may need to change the encoder parameters (m, n, d) to respond to changing characteristics of the ECG data. Enhancements in the bitstream specification can be made accordingly.

One advantage of using fixed length coding for measurements is random access to any part of the bitstream. This is not straightforward in stream codes. ANS, however, makes it easy to build a seekable decoder by introducing checkpoints [40]. Such checkpoints may be communicated in the frame headers.

APPENDIX

A. Compressive Sensing

Under the digital CS paradigm, we assume that the signals have passed through the analog to digital converter (ADC) and are represented using signed integers at a specific resolution. CS is an emerging signal compression paradigm that relies on the sparsity of signals (in an appropriate basis) to compress
Fig. 6. Variation of compression statistics vs the quantization parameter q at different measurement ratios m/n for record 100: (a) q vs PRD, (b) q vs PSS, (c) q vs Quality Score, (d) PMS vs PSS, (e) q vs bps, (f) q vs bpm.

TABLE V
MEASUREMENT STATISTICS AT m = 256, n = 512, d = 4

record       µy     σy    iqr    rng  skew  kurtosis    kld  rng/σ  iqr/σ
100      -490.0  224.4  293.0   2562  -0.5       3.7  0.049   11.4    1.3
101      -454.8  287.1  323.0   9647  -0.4      15.4  0.13    33.6    1.1
102      -393.4  197.0  257.2   2035  -0.6       3.7  0.053   10.3    1.3
103      -370.3  285.9  302.0   8824  -0.8      15.9  0.16    30.9    1.1
104      -360.1  231.5  284.0   4204  -0.6       5.0  0.071   18.2    1.2
105      -360.2  346.7  326.0  13704  -0.4      22.5  0.24    39.5    0.9
106      -284.7  270.6  314.0   4135  -0.1       4.4  0.046   15.3    1.2
107      -372.2  624.6  747.0   9639  -0.5       4.6  0.062   15.4    1.2
108      -365.9  422.3  336.0  12986  -0.2      22.1  0.36    30.7    0.8
109      -368.5  335.5  389.0   7487   0.2       7.3  0.064   22.3    1.2
111      -262.1  307.7  306.0  11998  -0.9      23.1  0.2     39.0    1.0
112     -1315.8  538.9  724.0   7839  -0.6       4.1  0.088   14.5    1.3
113      -248.3  336.7  379.0   6121  -0.1       5.2  0.064   18.2    1.1
114      -248.7  219.4  235.0   8756   1.2      30.4  0.14    39.9    1.1
115      -780.8  461.1  559.0   9715  -0.8       6.7  0.094   21.1    1.2
116     -1498.1  874.9  930.0  35083  -0.3      22.7  0.22    40.1    1.1
117     -1363.0  560.1  748.0   9179  -0.7       4.4  0.098   16.4    1.3
118     -1373.1  613.9  809.0  10935  -0.5       4.4  0.1     17.8    1.3
119     -1378.1  629.0  825.0   8363  -0.6       3.8  0.074   13.3    1.3
121     -1295.5  635.2  763.0  17012  -1.4      13.1  0.16    26.8    1.2
122     -1349.7  555.2  747.0   7870  -0.6       3.7  0.079   14.2    1.3
123     -1274.2  514.1  699.0   6603  -0.6       3.8  0.084   12.8    1.4
124     -1293.1  678.9  829.0  11886  -0.6       5.4  0.1     17.5    1.2
200      -168.8  261.8  288.0   6285  -0.3       7.7  0.08    24.0    1.1
201      -255.9  189.5  196.0   9986   0.7      55.1  0.16    52.7    1.0
202      -270.8  262.1  274.0  11421  -1.0      33.0  0.16    43.6    1.0
203      -271.1  471.7  455.0  13371  -0.1      15.1  0.2     28.3    1.0
205      -489.1  233.6  301.0   3257  -0.6       4.1  0.059   13.9    1.3
207      -273.5  339.1  344.0  11400  -1.2      18.1  0.22    33.6    1.0
208      -264.6  427.3  410.0  12449   0.4      16.5  0.18    29.1    1.0
209      -263.6  249.6  292.0   4731  -0.5       4.9  0.065   19.0    1.2
210      -250.9  228.3  249.0   7909   0.5      17.2  0.087   34.6    1.1
212      -249.7  296.9  336.0   7232  -0.5       7.3  0.088   24.4    1.1
213      -353.5  523.1  635.0   6528  -0.2       4.0  0.039   12.5    1.2
214      -257.8  315.3  369.0   4248   0.2       4.1  0.04    13.5    1.2
215      -243.0  208.7  256.0   3544  -0.3       4.2  0.039   17.0    1.2
217      -264.2  458.7  528.0  11928  -0.3       8.5  0.089   26.0    1.2
219      -938.8  687.6  811.0  10727  -0.4       4.6  0.074   15.6    1.2
220      -864.8  378.5  503.0   4731  -0.5       3.7  0.059   12.5    1.3
221      -262.4  228.0  270.0   3000  -0.1       4.1  0.036   13.2    1.2
222      -250.4  212.1  252.0   4327  -0.8       5.7  0.085   20.4    1.2
223      -818.4  419.1  529.0   7449  -0.6       5.0  0.068   17.8    1.3
228      -221.5  356.9  294.0  11962  -1.0      24.0  0.35    33.5    0.8
230      -271.8  285.3  308.0   7044  -0.4       8.6  0.1     24.7    1.1
231      -253.1  228.9  278.0   3250  -0.3       4.2  0.042   14.2    1.2
232      -242.2  191.9  243.0   2951  -0.4       4.0  0.039   15.4    1.3
233      -245.9  427.2  480.0   9636  -0.1       6.5  0.083   22.6    1.1
234      -257.2  345.1  316.0  12002  -0.7      19.8  0.25    34.8    0.9

TABLE VI
COMPRESSION STATISTICS AT m = 256, n = 512, d = 4

record   CR   PSS  bpm  bps   OH  Q SNR   SNR   PRD     QS
100     3.3  69.3  6.7  3.4  0.2   36.1  25.9   5.1   64.2
101     3.1  68.2  7.0  3.5  0.2   37.0  25.9   5.0   62.4
102     3.2  68.6  6.9  3.5  0.2   35.9  24.4   6.0   52.8
103     3.0  66.7  7.3  3.7  0.2   38.2  26.1   5.0   60.5
104     3.0  67.0  7.3  3.6  0.2   36.9  20.4   9.6   31.6
105     3.0  66.3  7.4  3.7  0.2   36.5  24.8   5.8   51.5
106     2.8  64.7  7.8  3.9  0.2   37.8  24.4   6.0   47.1
107     2.7  62.5  8.3  4.1  0.2   38.5  26.8   4.6   58.2
108     2.9  66.0  7.5  3.7  0.2   37.6  24.4   6.0   48.8
109     3.0  66.3  7.4  3.7  0.2   37.5  27.3   4.3   68.6
111     2.8  64.9  7.7  3.9  0.2   38.3  24.6   5.9   48.0
112     3.2  68.3  7.0  3.5  0.2   35.9  28.7   3.7   85.6
113     2.8  63.9  7.9  4.0  0.2   38.0  26.0   5.0   55.0
114     3.0  66.2  7.4  3.7  0.2   37.6  22.5   7.5   39.6
115     3.1  67.4  7.2  3.6  0.2   36.3  27.5   4.2   72.6
116     3.0  66.5  7.4  3.7  0.2   36.2  27.6   4.2   71.6
117     3.2  68.3  7.0  3.5  0.2   35.6  28.8   3.6   87.0
118     3.1  67.4  7.2  3.6  0.2   38.3  27.1   4.4   69.4
119     3.0  67.1  7.2  3.6  0.2   37.5  29.0   3.6   85.6
121     3.1  67.6  7.1  3.6  0.2   37.1  30.7   2.9  106.5
122     3.1  68.0  7.0  3.5  0.2   35.6  27.9   4.0   78.0
123     3.2  68.6  6.9  3.5  0.2   35.6  28.3   3.8   82.8
124     3.0  67.2  7.2  3.6  0.2   37.3  30.0   3.2   96.7
200     2.8  63.7  8.0  4.0  0.2   37.5  22.2   7.8   35.3
201     3.0  67.0  7.3  3.6  0.2   38.1  26.2   4.9   61.8
202     2.9  65.1  7.7  3.8  0.2   38.4  25.8   5.1   55.9
203     2.8  64.0  7.9  4.0  0.2   38.2  22.7   7.3   38.0
205     3.2  69.1  6.8  3.4  0.2   36.2  25.8   5.1   62.9
207     2.8  64.6  7.8  3.9  0.2   37.4  25.5   5.3   53.4
208     2.8  63.9  8.0  4.0  0.2   37.9  24.7   5.8   47.7
209     2.8  64.5  7.8  3.9  0.2   38.4  20.3   9.7   29.0
210     2.9  65.3  7.6  3.8  0.2   38.3  24.4   6.1   47.7
212     2.8  64.3  7.9  3.9  0.2   37.8  20.1   9.9   28.2
213     2.7  63.6  8.0  4.0  0.2   38.6  22.7   7.3   37.6
214     2.8  64.0  7.9  4.0  0.2   37.6  26.7   4.6   60.0
215     2.9  65.3  7.6  3.8  0.2   37.5  19.8  10.2   28.2
217     2.8  64.3  7.8  3.9  0.2   37.1  27.0   4.5   62.5
219     3.0  66.7  7.3  3.7  0.2   36.9  29.0   3.6   84.4
220     3.3  69.6  6.7  3.3  0.2   35.1  26.6   4.7   69.9
221     2.9  65.1  7.7  3.8  0.2   38.9  24.9   5.7   50.5
222     2.9  65.6  7.6  3.8  0.2   37.6  22.0   7.9   36.8
223     3.1  68.0  7.0  3.5  0.2   36.3  27.9   4.0   77.8
228     2.8  64.5  7.8  3.9  0.2   36.9  22.9   7.1   39.4
230     2.8  64.6  7.8  3.9  0.2   38.2  24.7   5.9   48.2
231     2.9  65.6  7.6  3.8  0.2   38.2  25.5   5.3   54.8
232     3.0  66.8  7.3  3.7  0.2   37.9  22.8   7.3   41.4
233     2.8  63.8  8.0  4.0  0.2   37.9  24.9   5.7   48.8
234     2.8  64.6  7.8  3.9  0.2   37.3  25.2   5.5   51.6
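All measurements reported in the tables above were obtained with a sparse binary sensing matrix having d = 4 ones per column, so the sensing step itself needs nothing beyond integer additions. The sketch below is our own illustration of such a construction (the paper's released code is in [35]; names and the seeding scheme are assumptions):

```python
import random

def sparse_binary_matrix(m, n, d, seed=0):
    """Represent an m x n sparse binary sensing matrix by listing, for
    each of the n columns, the d distinct rows that hold a 1."""
    rng = random.Random(seed)
    return [rng.sample(range(m), d) for _ in range(n)]

def measure(cols, x, m):
    """Compute y = Phi @ x using integer additions only: each signal
    sample x[j] is added into the d rows of its column."""
    y = [0] * m
    for j, rows in enumerate(cols):
        for i in rows:
            y[i] += x[j]
    return y

cols = sparse_binary_matrix(m=256, n=512, d=4)
x = [100] * 512                 # a toy integer signal window
y = measure(cols, x, m=256)
assert sum(y) == 4 * sum(x)     # every sample contributes to d = 4 rows
```

Because each column touches exactly d rows, a window of n samples costs only d·n integer additions, which is what makes the sensing operation attractive for resource-limited WBAN sensor nodes.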

using incoherent measurements. The basic model can be expressed as:

y = Φx + e                                  (25)

where x ∈ R^n is a signal to compress (of length n). Φ ∈ R^{m×n} is a sensing matrix which compresses x via linear measurements. Each row of Φ represents one linear measurement as a linear functional on x. y is a measurement vector consisting of m distinct measurements done on x. By design, Φ is a full rank matrix. Hence every set of m columns of Φ is linearly independent. e ∈ R^m is the error/noise introduced during the measurement process. In our digital CS
paradigm, the noise is introduced by the quantization step in our encoder. In our case x is a window of a raw ECG record from one channel/lead. Often x is not sparse by itself, but is sparse in some orthonormal basis Ψ, expressed as x = Ψα, where the representation α is sparse. ECG signals exhibit sparsity in wavelet bases.

TABLE VII
SUMMARY COMPRESSION STATISTICS AT m = 256, n = 512, d = 4

        CR   PSS  bpm  bps   OH  Q SNR   SNR   PRD     QS
mean   3.0  66.0  7.5  3.7  0.2   37.3  25.4   5.6   57.8
std    0.2   1.8  0.4  0.2  0.0    0.9   2.6   1.8   18.6
min    2.7  62.5  6.7  3.3  0.2   35.1  19.8   2.9   28.2
50%    2.9  66.1  7.5  3.7  0.2   37.5  25.7   5.2   54.9
max    3.3  69.6  8.3  4.1  0.2   38.9  30.7  10.2  106.5

Most natural signals have richer structures beyond sparsity. A common structure in natural signals is a block/group structure [41]. We introduce the block/group structure on x as

x = [x1 x2 . . . xg]                        (26)

where each xi is a block of b values. The signal x consists of g such blocks/groups. Under the block sparsity model, only a few k ≪ g blocks are nonzero (active) in the signal x; however, the locations of these blocks are unknown. We can rewrite the sensing equation as:

y = Σ_{i=1}^{g} Φi xi + e                   (27)

by splitting the sensing matrix into blocks of columns appropriately.

B. Block Sparse Bayesian Learning

Under the sparse Bayesian framework, each block is assumed to satisfy a parametrized multivariate Gaussian distribution:

P(xi; γi, Bi) = N(0, γi Bi), ∀ i = 1, . . . , g.    (28)

The covariance matrix Bi captures the intra block correlations. We further assume that the blocks are mutually uncorrelated. The prior of x can then be written as

P(x; {γi, Bi}i) = N(0, Σ0)                  (29)

where

Σ0 = diag{γ1 B1, . . . , γg Bg}.            (30)

We also model the correlation among the values within each active block as an AR-1 process. Under this assumption the matrices Bi take the form of a Toeplitz matrix

    ⎡ 1        r        · · ·  r^{b−1} ⎤
B = ⎢ r        1        · · ·  r^{b−2} ⎥    (31)
    ⎢ ⋮        ⋮        ⋱      ⋮       ⎥
    ⎣ r^{b−1}  r^{b−2}  · · ·  1       ⎦

where r is the AR-1 model coefficient. This constraint significantly reduces the model parameters to be learned.

Measurement error is modeled as independent zero mean Gaussian noise P(e; λ) ∼ N(0, λI). BSBL doesn't require us to provide the value of noise variance as input. It is able to estimate λ within the algorithm.

The estimate of x under the Bayesian learning framework is given by the posterior mean of x given the measurements y.

C. Entropy Coding

From the perspective of entropy coding, a symbol is a unit of information to be transmitted. For us, each value in a measurement vector is a symbol. An alphabet is the set of all symbols that can be transmitted. For us, it is the set of integral values that the entries in a measurement vector can take. A message is a sequence of symbols to be transmitted. For us, it is the sequence of all measurement values. There are two primary types of coding schemes: symbol codes and stream codes. A symbol code (e.g., Huffman code) assigns a unique bit string to each symbol. A stream code maps a message (a sequence of symbols) to a (large) integer. Symbol codes approach optimal compression only when the Shannon information content of each symbol happens to be close to an integral number of bits. In particular, Huffman coding is not optimal in general, since it encodes each symbol using an integral number of bits; it is not capable of encoding symbols using a fractional number of bits.

In contrast, stream codes can achieve near optimal compression for any probability distribution. Arithmetic Coding (AC) [42], [43] encodes the entire message of symbols into a single (large) integer, enabling fractional bits for different symbols in a message. A recent family of entropy coders called Asymmetric Numeral Systems (ANS) [33] allows for faster implementation thanks to directly operating on a single natural number representing the current information. It combines the speed of Huffman coding with the compression rate of arithmetic coding. The constriction library [40] provides a convenient way to model each symbol in a message as a sample from a given probability distribution and then encode it using ANS.

REFERENCES

[1] T. Collins, B. Mikkelsen, and S. Axelrod, "Interact, engage or partner? working with the private sector for the prevention and control of non-communicable diseases," Cardiovascular Diagnosis and Therapy, vol. 9, no. 2, p. 158, 2019.
[2] H. Cao, V. Leung, C. Chow, and H. Chan, "Enabling technologies for wireless body area networks: A survey and outlook," IEEE Communications Magazine, vol. 47, no. 12, pp. 84–93, 2009.
[3] A. Milenković, C. Otto, and E. Jovanov, "Wireless sensor networks for personal health monitoring: Issues and an implementation," Computer Communications, vol. 29, no. 13-14, pp. 2521–2533, 2006.
[4] G. B. Moody and R. G. Mark, "The impact of the MIT-BIH arrhythmia database," IEEE Engineering in Medicine and Biology Magazine, vol. 20, no. 3, pp. 45–50, 2001.
[5] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, "Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes," IEEE Transactions on Biomedical Engineering, vol. 58, no. 9, pp. 2456–2466, 2011.
[6] B. Singh, A. Kaur, and J. Singh, "A review of ECG data compression techniques," International Journal of Computer Applications, vol. 116, no. 11, 2015.
[7] S. O. Rajankar and S. N. Talbar, "An electrocardiogram signal compression techniques: a comprehensive review," Analog Integrated Circuits and Signal Processing, vol. 98, no. 1, pp. 59–74, 2019.
[8] D. Craven, B. McGinley, L. Kilmartin, M. Glavin, and E. Jones, "Compressed sensing for bioelectric signals: A review," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 2, pp. 529–540, 2014.
[9] S. S. Kumar and P. Ramachandran, "Review on compressive sensing algorithms for ECG signal for IoT based deep learning framework," Applied Sciences, vol. 12, no. 16, p. 8368, 2022.
[10] H. Al-Nashash, "A dynamic Fourier series for the compression of ECG using FFT and adaptive coefficient estimation," Medical Engineering & Physics, vol. 17, no. 3, pp. 197–203, 1995.
[11] L. V. Batista, E. U. K. Melcher, and L. C. Carvalho, "Compression of ECG signals by optimized quantization of discrete cosine transform coefficients," Medical Engineering & Physics, vol. 23, no. 2, pp. 127–134, 2001.
[12] A. Bendifallah, R. Benzid, and M. Boulemden, "Improved ECG compression method using discrete cosine transform," Electronics Letters, vol. 47, no. 2, p. 1, 2011.
[13] A. Djohan, T. Q. Nguyen, and W. J. Tompkins, "ECG compression using discrete symmetric wavelet transform," in Proceedings of 17th International Conference of the Engineering in Medicine and Biology Society, vol. 1. IEEE, 1995, pp. 167–168.
[14] Z. Lu, D. Y. Kim, and W. A. Pearlman, "Wavelet compression of ECG signals by the set partitioning in hierarchical trees algorithm," IEEE Transactions on Biomedical Engineering, vol. 47, no. 7, pp. 849–856, 2000.
[15] M. Pooyan, A. Taheri, M. Moazami-Goudarzi, and I. Saboori, "Wavelet compression of ECG signals using SPIHT algorithm," International Journal of Signal Processing, vol. 1, no. 3, p. 4, 2004.
[16] B. S. Kim, S. K. Yoo, and M.-H. Lee, "Wavelet-based low-delay ECG compression algorithm for continuous ECG transmission," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 77–83, 2006.
[17] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[18] R. G. Baraniuk, "Compressive sensing [lecture notes]," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, 2007.
[19] E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians: Madrid, August 22-30, 2006: invited lectures, 2006, pp. 1433–1452.
[20] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[21] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[22] L. F. Polania, R. E. Carrillo, M. Blanco-Velasco, and K. E. Barner, "Compressed sensing based method for ECG compression," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011, pp. 761–764.
[23] J. Zhang, Z. Gu, Z. L. Yu, and Y. Li, "Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted l1 minimization reconstruction," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 2, pp. 520–528, 2014.
[24] Z. Zhang, T.-P. Jung, S. Makeig, and B. D. Rao, "Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning," IEEE Transactions on Biomedical Engineering, vol. 60, no. 2, pp. 300–309, 2012.
[25] Z. Zhang, T.-P. Jung, S. Makeig, Z. Pi, and B. D. Rao, "Spatiotemporal sparse Bayesian learning with applications to compressed sensing of multichannel physiological signals," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 6, pp. 1186–1197, 2014.
[26] Z. Zhang and B. D. Rao, "Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation," IEEE Transactions on Signal Processing, vol. 61, no. 8, pp. 2009–2015, 2013.
[27] H. Zhang, Z. Dong, Z. Wang, L. Guo, and Z. Wang, "CSNet: A deep learning approach for ECG compressed sensing," Biomedical Signal Processing and Control, vol. 70, p. 103065, 2021.
[28] M. Mangia, L. Prono, A. Marchioni, F. Pareschi, R. Rovatti, and G. Setti, "Deep neural oracles for short-window optimized compressed sensing of biosignals," IEEE Transactions on Biomedical Circuits and Systems, vol. 14, no. 3, pp. 545–557, 2020.
[29] L. F. Polanía and R. I. Plaza, "Compressed sensing ECG using restricted Boltzmann machines," Biomedical Signal Processing and Control, vol. 45, pp. 237–245, 2018.
[30] F. Picariello, G. Iadarola, E. Balestrieri, I. Tudosa, and L. De Vito, "A novel compressive sampling method for ECG wearable measurement systems," Measurement, vol. 167, p. 108259, 2021.
[31] K. Luo, J. Li, and J. Wu, "A dynamic compression scheme for energy-efficient real-time wireless electrocardiogram biosensors," IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 9, pp. 2160–2169, 2014.
[32] S. A. Chouakri, O. Djaafri, and A. Taleb-Ahmed, "Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology," in Journal of Physics: Conference Series, vol. 454, no. 1. IOP Publishing, 2013, p. 012086.
[33] J. Duda, "Asymmetric numeral systems: entropy coding combining speed of Huffman coding with compression rate of arithmetic coding," arXiv preprint arXiv:1311.2540, 2013.
[34] Collet, Yann, "Zstandard," 2022. [Online]. Available: https://facebook.github.io/zstd
[35] Kumar, Shailesh, "Compressive sensing based codec for ECG," 2022. [Online]. Available: https://github.com/shailesh1729/ecg
[36] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. e215–e220, 2000.
[37] Z. Zhang, S. Wei, D. Wei, L. Li, F. Liu, and C. Liu, "Comparison of four recovery algorithms used in compressed sensing for ECG signal processing," in 2016 Computing in Cardiology Conference (CinC). IEEE, 2016, pp. 401–404.
[38] Y. Zigel, A. Cohen, and A. Katz, "The weighted diagnostic distortion (WDD) measure for ECG signal compression," IEEE Transactions on Biomedical Engineering, vol. 47, no. 11, pp. 1422–1430, 2000.
[39] S. Kumar, "CR-Sparse: Hardware accelerated functional algorithms for sparse signal processing in Python using JAX," Journal of Open Source Software, vol. 6, no. 68, p. 3917, 2021.
[40] R. Bamler, "Understanding entropy coding with asymmetric numeral systems (ANS): a statistician's perspective," arXiv preprint arXiv:2201.01741, 2022.
[41] Y. C. Eldar, P. Kuppinger, and H. Bolcskei, "Block-sparse signals: Uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042–3054, 2010.
[42] J. Rissanen and G. G. Langdon, "Arithmetic coding," IBM Journal of Research and Development, vol. 23, no. 2, pp. 149–162, 1979.
[43] I. H. Witten, R. M. Neal, and J. G. Cleary, "Arithmetic coding for data compression," Communications of the ACM, vol. 30, no. 6, pp. 520–540, 1987.
