
Contents

ABSTRACT:
INTRODUCTION:
OBJECTIVES:
METHODOLOGY:
RESULT AND DISCUSSION:
RESULT:
Index:

Figure 1: A real-time digital signal processing system
Figure 2: noise and frequency domain
Figure 3: after using filter
Figure 4: getting original signal
ABSTRACT:

Digital hardware can process physical signals thanks to analog-to-digital
conversion. This conversion is done in two steps: sampling, which converts continuous-time
signals to discrete-time signals, and quantization, which uses a finite number of bits to represent
continuous-amplitude quantities. When operating at high rates and fine resolutions, this
conversion can be expensive because it is often carried out using generic uniform mappings that
are unaware of the purpose for which the signal is acquired. In this work, we construct data-
driven, task-oriented analog-to-digital converters (ADCs) that learn how to translate an analogue
signal into a sampled digital representation in order to carry out the system task effectively. We
present a model for sampling and quantization that accurately captures these processes while also
enabling the system to learn non-uniform mappings from training data. Our numerical findings
show that the proposed method outperforms traditional uniform ADCs while achieving
performance comparable to operating without quantization constraints.
INTRODUCTION:

Numerous electronic systems use digital hardware to process
physical signals. Digital signal processors rely on analog-to-digital conversion to
represent analogue quantities as a set of bits. Two steps are involved in converting a continuous-
time (CT) signal with continuous-amplitude values to a finite-bit representation: in order to
process the analogue signal digitally, it must first be sampled into a discrete-time process and
then quantized into discrete amplitude values. Scalar analog-to-digital converters (ADCs) are
frequently used for analogue signal acquisition. These devices take uniformly spaced temporal
samples of the CT signal and convert them to a digital representation by uniformly mapping the
real line. This acquisition approach is straightforward, but it has limitations when it comes to
representing signals digitally, especially when operating with a low quantization
resolution and a limited sampling rate. Moreover, the same process is applied no matter what task
requires the acquisition of the analogue signal into a digital representation. Here, we propose a
task-based acquisition system that focuses on classification tasks and uses scalar ADCs for
signals that follow a finite basis expansion model. We propose a data-driven approach based
on machine learning (ML), since analytically deriving task-based techniques is challenging and
frequently necessitates imposing a constrained framework. We design the system so that it can
reliably complete its task by learning its sampling function, quantization rule, and analogue and
digital processing from training data. The continuous-to-discrete character of the sampling and
quantization mappings poses a significant problem for the design of ML-based ADCs and their
integration into deep neural networks (DNNs): these procedures are either non-differentiable or
nullify the gradient, which prevents the direct use of traditional training methods based on
backpropagation. To overcome this, we adopt a soft-to-hard strategy.
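The soft-to-hard idea can be illustrated with a small sketch. The learned quantizer itself is not specified in this excerpt, so the construction below, a staircase approximated by a sum of shifted tanh steps, is a commonly used differentiable surrogate and is purely illustrative; the level values are made up.

```python
import numpy as np

def hard_quantize(x, levels):
    """Map each sample to the nearest quantization level (non-differentiable)."""
    levels = np.asarray(levels, dtype=float)
    return levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]

def soft_quantize(x, levels, c=10.0):
    """Differentiable surrogate: a staircase built from shifted tanh steps.

    As the sharpness c grows, the soft staircase approaches the hard
    quantizer, so gradients can flow during training while the deployed
    system switches to the hard rule.
    """
    levels = np.sort(np.asarray(levels, dtype=float))
    mids = (levels[:-1] + levels[1:]) / 2      # decision boundaries
    steps = np.diff(levels)                    # jump height at each boundary
    return levels[0] + 0.5 * np.sum(
        steps[None, :] * (1 + np.tanh(c * (x[:, None] - mids[None, :]))), axis=1)

x = np.linspace(-1, 1, 201)
hard = hard_quantize(x, [-0.75, -0.25, 0.25, 0.75])
soft = soft_quantize(x, [-0.75, -0.25, 0.25, 0.75], c=50.0)
```

During training the soft version stands in for the hard one; at a large enough c the two agree almost everywhere except in narrow bands around the decision boundaries.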

OBJECTIVES:

ADCs and DACs are incredibly helpful tools that let us connect real-world
events, which are typically analogue, with microprocessors, which are less expensive and offer
better precision and accuracy than their analogue counterparts, for monitoring or control.
METHODOLOGY:

On each falling or rising edge of the sample clock, the analogue signal is sampled by the
analogue-to-digital converter. Once per cycle, the ADC captures the analogue signal, measures it,
and transforms it into a digital value. The ADC approximates the signal with fixed precision and
outputs the result as a series of digital values.
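This per-cycle sample-measure-convert behaviour can be modelled in a few lines. The sketch below is a minimal uniform converter, not any specific device; the bit depth and input range are assumed values, and Python is used purely for illustration (the report's own code, in the Index, is MATLAB).

```python
import numpy as np

def adc_convert(value, n_bits=8, v_min=-1.0, v_max=1.0):
    """Convert one analogue sample to an integer code with fixed precision.

    The input range [v_min, v_max] is split into 2**n_bits uniform steps;
    the returned code is the index of the step the sample falls into.
    """
    n_levels = 2 ** n_bits
    step = (v_max - v_min) / n_levels
    clipped = np.clip(value, v_min, v_max - step)  # keep the code in range
    return int((clipped - v_min) // step)

def dac_convert(code, n_bits=8, v_min=-1.0, v_max=1.0):
    """Reconstruct the mid-point voltage of the step the code names."""
    step = (v_max - v_min) / (2 ** n_bits)
    return v_min + (code + 0.5) * step

code = adc_convert(0.5)      # one 0.5 V sample through an 8-bit converter
approx = dac_convert(code)   # reconstruction, within half a step of 0.5 V
```

The fixed precision shows up directly: the reconstruction can never be closer to the input than half a quantization step, no matter how clean the analogue signal is.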

Sampling is the process of converting an initially continuous signal into a discrete-time
representation. It turns an analogue signal into a series of impulses, each of which represents the
signal's amplitude at a certain instant. For functions that vary in space, time, or any other
dimension, sampling can likewise be carried out in two or more dimensions. For functions that
vary over time, let s(t) be a continuous function (or "signal") to be sampled, and let sampling be
carried out by measuring the continuous function's value every T seconds (referred to as the
sampling interval or the sampling period). The sampled function is then represented by the
sequence s(nT), for integer values of n. The average number of samples obtained in one second
(samples per second) is known as the sampling frequency or sampling rate, abbreviated as fs,
with fs = 1/T.
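The relationship s[n] = s(nT) can be made concrete with a short sketch; the cosine and the rate fs = 100 Hz are arbitrary example choices.

```python
import numpy as np

def sample(s, fs, duration):
    """Sample the continuous function s(t) every T = 1/fs seconds.

    Returns the sample instants nT and the sequence s(nT) for
    n = 0, 1, ... covering [0, duration).
    """
    n = np.arange(int(round(duration * fs)))  # sample indices
    t = n / fs                                # sample instants nT
    return t, s(t)

# Example: a 5 Hz cosine sampled at fs = 100 Hz for one second.
t, sn = sample(lambda u: np.cos(2 * np.pi * 5 * u), fs=100, duration=1.0)
```

Each entry sn[n] is exactly the value the continuous signal takes at time nT, which is all a sampler records.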

‘‹•‡

‫ݔ‬ሺ‫ݐ‬ሻ ‫݊ݔ‬ ‫݊ݕ‬ ‫ݕ‬ሺ‫)ݐ‬


Analogue DAC
signal ADC DSP
Source

Figure 1: A real-time digital signal processing system

The number of distinct sample values that can be represented in a digital quantity is referred to
as the quantization level. Rounding and truncation are the two methods of quantization used in
the A/D process. In rounding, a numerical value is replaced with the nearest representable value,
whereas in truncation, the digits beyond the available precision are simply discarded. In
mathematics and digital signal processing, quantization is the process of mapping values from a
large collection of inputs, often a continuous set, to values from a smaller, countable set of
outputs.

Typical quantization procedures include truncation and rounding. Since rounding is typically
required when representing a signal in digital form, quantization is involved to some extent in
almost all digital signal processing. Quantization is also the fundamental building block of
virtually all lossy compression methods. Quantization error is the difference between an input
value and its quantized value, such as round-off error. A quantizer is a component or algorithmic
operation that performs quantization.
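The difference between the two rules can be seen directly in code; the step size of 0.1 is an illustrative choice.

```python
import numpy as np

def quantize_round(x, step):
    """Rounding: replace each value with the nearest multiple of the step."""
    return step * np.round(np.asarray(x) / step)

def quantize_truncate(x, step):
    """Truncation: discard the fraction, keeping the multiple of the step
    closest to zero; the less significant information is simply dropped."""
    return step * np.trunc(np.asarray(x) / step)

x = np.array([0.26, -0.26, 0.74])
err_round = x - quantize_round(x, 0.1)     # magnitude at most step/2
err_trunc = x - quantize_truncate(x, 0.1)  # magnitude can approach a full step
```

The quantization error mentioned above is exactly these differences: rounding bounds it by half a step, while truncation allows it to grow to almost a full step.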

Consider an analogue source that generates an analogue signal that combines three sinusoidal
waves with varying peak amplitudes and three frequencies, f1, f2, and f3, as shown below:

x(t) = 2 cos(2π f1 t) + 6 cos(2π f2 t) + 3 cos(2π f3 t)
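Since the text does not give numeric values for f1, f2, and f3, the frequencies below are assumptions chosen to sit well under the Nyquist limit; the sketch simply generates the sampled version of x(t).

```python
import numpy as np

# Assumed example frequencies; the text leaves f1, f2, f3 unspecified.
f1, f2, f3 = 1000.0, 2000.0, 3000.0
fs = 44000.0                 # sampling rate, well above the Nyquist rate 2*f3

N = 440                      # 10 ms worth of samples at fs
t = np.arange(N) / fs        # sample instants nT
x = (2 * np.cos(2 * np.pi * f1 * t)
     + 6 * np.cos(2 * np.pi * f2 * t)
     + 3 * np.cos(2 * np.pi * f3 * t))
# At t = 0 all three cosines align, so the peak value is 2 + 6 + 3 = 11.
```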

RESULT AND DISCUSSION:

These results are based on analogue-to-digital signal conversion. We take
an analogue signal, convert it to digital, and add noise in the time domain and the frequency
domain; we then recover the corrupted signal by applying a filter to obtain the original signal.

After doing all of this, we get back the original signal, which we use as the input for the
analogue system. The filter is therefore used for the purpose of recovering the original signal.
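The corrupt-with-noise-then-filter procedure described here can be sketched as follows; the noise level and the moving-average filter length are illustrative choices, not the report's exact settings (the report's own MATLAB code appears in the Index).

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 44000
t = np.arange(440) / fs                       # 10 ms of samples
clean = np.cos(2 * np.pi * 500 * t)           # example low-frequency signal

# Corrupt the signal with additive white noise.
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Simple low-pass filter: a 21-tap moving average smooths out the noise
# while leaving the slow 500 Hz component largely intact.
taps = 21
kernel = np.ones(taps) / taps
recovered = np.convolve(noisy, kernel, mode="same")

err_noisy = np.mean((noisy - clean) ** 2)          # before filtering
err_recovered = np.mean((recovered - clean) ** 2)  # after filtering
```

Because the noise occupies the whole band while the signal is concentrated at low frequency, the filtered output lies closer to the clean signal than the noisy input does.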

RESULT:

The following are the results obtained after running the code on the signal.


[Two-panel plot: "noise in time domain" (x(t) versus Time, ×10⁻³ s) and "noise in Frequency Domain" (amplitude, ×10⁻⁸, versus f (Hz), ×10⁴)]

Figure 2: noise in the time and frequency domains


[Plot: signal after filtering, amplitude 0 to 1 versus time −0.5 to 0.5]

Figure 3: after using the filter


[Two-panel plot with the same axes as Figure 2, showing the recovered signal in the time and frequency domains]

Figure 4: getting the original signal

Index:
close all;
clear;
clc

fs = 44000;                      % sampling frequency (Hz)
T = 1/fs;                        % sampling period (s)
t = -0.5:T:0.5;                  % time vector
L = length(t);

% Gaussian pulse used as the test signal
x = 1/(0.4*sqrt(2*pi))*exp(-t.^2/(2*(0.1*1e-3)^2));

subplot(211)
plot(t,x)
title('noise in time domain')
xlabel('Time')
ylabel('x(t)')
axis([-1e-3 1e-3 0 1.1])

% One-sided power spectrum via the FFT
n = 2^nextpow2(L);
Y = fft(x,n);
f = fs*(0:(n/2))/n;
P = abs(Y/n).^2;
subplot(212)
plot(f,P(1:n/2+1))
title('noise in Frequency Domain')
xlabel('f (Hz)')
ylabel('amplitude')

% Low-pass filter: a 10-tap moving average (filter(b,a,x) expects
% coefficient vectors b and a, not the sampling rate and time vector)
b = ones(1,10)/10;
y = filter(b,1,x);
figure
plot(t,x)
hold on
plot(t,y)
legend('original','filtered')

% Downsample by a factor of 10 and plot the kept samples
idx = 1:10:numel(x);
xds = x(idx);
figure
plot(t,x)
hold on
plot(t(idx),xds,'o')
legend('original','recovered')
