DCO Presentation 5 PDF


BSc IT

Digital and Computer Organization


Semester - III

Copyright © Amity University


Direct Memory Access (DMA)
• Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to read from and/or write to system memory independently of the central processing unit.

• Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards.

• DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory); DMA transfers data between this local memory and the main memory.

• Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel.
Direct Memory Access (DMA)
• DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to heavy overhead. Otherwise, the CPU would have to copy each piece of data from source to destination, making itself unavailable for other tasks.

• A DMA transfer copies a block of memory from one device to another. While the CPU initiates the transfer by issuing a DMA command, it does not execute it. For so-called "third-party" DMA, as is normally used with the ISA bus, the transfer is performed by a DMA controller, which is typically part of the motherboard chipset.



Uses of DMA Transfer
• The DMA unit has some intelligence and decision-making ability in its control logic, but it must be told what operations to perform. This is done by having the CPU store information into the DMA controller, and is called programming the DMA.

• Useful for transferring bulk amounts of data.

• The data can be transferred at the fastest rate the devices support.

• The CPU can be utilized for other jobs while the DMA transfer proceeds.


Steps of DMA Transfer Process
1. The peripheral controller places an interrupt request signal on the I/O bus, signifying that it is ready to transfer data.

2. The DMA recognizes that the request is from the device it is controlling.

3. The DMA sends an interrupt acknowledge to the peripheral.

4. The peripheral places data on the I/O bus.

5. The DMA stores the data into its data assembly register.

6. The DMA takes control of the system bus.

7. The DMA places the contents of its memory address register onto the system bus along with a memory write request.
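The seven steps above can be sketched as a toy simulation. All class and register names here are illustrative only, not a real driver API; the request/acknowledge handshake of steps 1-3 is collapsed into a single call.

```python
# Toy simulation of the DMA transfer steps above (illustrative names only).

class Memory:
    def __init__(self, size):
        self.cells = [0] * size

class DMAController:
    def __init__(self, memory, start_addr):
        self.memory = memory
        self.address_reg = start_addr   # memory address register (step 7)
        self.assembly_reg = None        # data assembly register (step 5)

    def handle_request(self, data):
        # Steps 1-3: the peripheral raises a request; the DMA recognizes
        # it and acknowledges (collapsed into this call).
        # Steps 4-5: the peripheral places data on the I/O bus and the
        # DMA latches it into its data assembly register.
        self.assembly_reg = data
        # Steps 6-7: the DMA takes the system bus and issues a memory
        # write at the address held in its memory address register.
        self.memory.cells[self.address_reg] = self.assembly_reg
        self.address_reg += 1           # advance for the next word

mem = Memory(16)
dma = DMAController(mem, start_addr=4)
for byte in (0x41, 0x42, 0x43):         # peripheral delivers three bytes
    dma.handle_request(byte)
print(mem.cells[4:7])                   # [65, 66, 67]
```

Note the CPU appears nowhere in the inner loop: once the controller is programmed with a start address, each transfer is handled entirely by the DMA.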



Direct Memory Access (DMA)



Interrupt priority level
• An integer-based IPL may be as small as a single bit, with just two values: 0 (all interrupts enabled) or 1 (all interrupts disabled).

• However, some architectures permit a greater range of values, where each value enables interrupt requests that specify a higher level, while blocking ones from the same or a lower level.

• Assigning different priorities to interrupt requests can help balance system throughput against interrupt latency: some kinds of interrupts need a quicker response than others, and when the amount of processing for them is small it makes sense to assign a higher priority to that kind of interrupt.
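The multi-level scheme above reduces to a one-line acceptance rule. A minimal sketch follows; the strictly-greater convention is an assumption of this example (some architectures deliver requests at the same level as well):

```python
# Minimal sketch of an integer interrupt priority level (IPL).
# Assumed convention: a request is delivered only when its level is
# strictly higher than the processor's current IPL, so IPL 0 enables
# everything and the maximum IPL blocks everything below it.

def accepts(current_ipl, request_level):
    return request_level > current_ipl

print(accepts(0, 1))   # True: at IPL 0 all interrupts are enabled
print(accepts(5, 5))   # False: the same level is blocked
print(accepts(5, 6))   # True: a higher level gets through
```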
Serial Communication
• Serial communication is the process of sending data one bit at a time, sequentially, over a communication channel or computer bus.

• Serial communication is used for all long-haul communication and most computer networks, where the cost of cable and synchronization difficulties make parallel communication impractical.

• Communication between components may take place over:

1. Serial buses
2. Parallel buses


Serial Communication



Modes of Serial Communication
• Serial buses
Integrated circuits are more expensive when they have more pins. To reduce the number of pins in a package, many ICs use a serial bus to transfer data when speed is not important.

• Parallel buses
The communication links across which computers, or parts of computers, talk to one another may be either serial or parallel. A parallel link transmits several streams of data (perhaps representing particular bits of a stream of bytes) along multiple channels (wires, printed circuit tracks, optical fibres, etc.).


Asynchronous Serial
Communication
• Asynchronous serial communication describes an asynchronous,
serial transmission protocol in which a start signal is sent prior to each
byte, character or code word and a stop signal is sent after each code
word.

• The start signal serves to prepare the receiving mechanism for the
reception and registration of a symbol and the stop signal serves to
bring the receiving mechanism to rest in preparation for the reception of
the next symbol.
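The start/stop framing described above can be made concrete in a few lines. This is a sketch of the common 8N1 format (one start bit, eight data bits, one stop bit); sending the data bits LSB-first is an assumption of this example:

```python
# Sketch of 8N1 asynchronous framing: one start bit (0), eight data
# bits (LSB first, an assumption of this example), one stop bit (1).

def frame(byte):
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data + [1]                      # start + data + stop

def deframe(bits):
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))

bits = frame(0x5A)
print(len(bits))               # 10 bits on the line for 8 bits of payload
assert deframe(bits) == 0x5A   # receiver recovers the original byte
```

The two extra bits per byte are the framing overhead that synchronous schemes avoid.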



Synchronous Serial Communication
• As its name implies, synchronous communication takes place between a transmitter and a receiver operating on synchronized clocks.

• In a synchronous system, the communication partners have a short conversation before data exchange begins. In this conversation, they align their clocks and agree upon the parameters of the data transfer, including the time interval between bits of data.

• Any data that falls outside these parameters is assumed to be either in error or a placeholder used to maintain synchronization. (Synchronous lines must remain constantly active in order to maintain synchronization.)
Advantage & Disadvantage of
Synchronous Communication
• The main advantage of synchronous data communication is high speed. Synchronous communication requires high-speed peripherals/devices and a good-quality, high-bandwidth communication channel.

• The main disadvantage is possible inaccuracy: when a receiver goes out of synchronization, it loses track of where individual characters begin and end, and correcting the resulting errors takes additional time.


Interrupts
A computer system must provide a method for allowing mechanisms to interrupt normal processing.
Interrupts improve processor efficiency: most external devices are much slower than the processor, and "busy waiting" takes up too many resources.
Examples:
External interrupts:
Timing devices, circuits monitoring the power supply, an I/O device requesting data or signalling a completed data transfer, timeout errors.
Internal interrupts (caused by an exception condition, also called traps):
Illegal use of an instruction or data, e.g. register overflow, attempt to divide by zero, invalid opcode, stack overflow.
Timer interrupts: the OS can perform operations on a regular basis.
Software interrupts: a special call instruction that behaves like an interrupt.
Benefits of Interrupts
[Figure: program timing with and without interrupts, in three scenarios]

• No interrupts: after issuing a WRITE I/O command, the program must wait for the I/O operation to finish before continuing.

• Interrupts, short I/O wait: the I/O operation completes within the time it takes to execute the instructions that occur before the next I/O command. The interrupt handler runs between code segments and the processor is kept busy the whole time.

• Interrupts, long I/O wait (more realistic): the next I/O command comes before the first I/O has completed, so the processor still needs to wait. Some time is saved, but not all.
An example
Busy wait:
Consider a computer that can execute the two instructions that read the status register and check the flag in 1 µs.
An input device transfers data at an average rate of 100 bytes per second, equivalent to one byte every 10,000 µs.
The CPU will therefore check the flag 10,000 times between each transfer.
Interrupt driven:
The CPU could use this time to perform other useful processing.
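The arithmetic behind the busy-wait figures above is worth making explicit:

```python
# Arithmetic behind the busy-wait example above.
check_cost_us = 1                  # read status register + test flag
rate_bytes_per_s = 100             # device transfer rate

interval_us = 1_000_000 // rate_bytes_per_s       # µs between bytes
checks_between_bytes = interval_us // check_cost_us

print(interval_us)            # 10000 µs per byte
print(checks_between_bytes)   # 10000 wasted flag checks per byte
```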



Interrupt Cycle

The interrupt cycle is added to the instruction cycle.

The processor checks for an interrupt, indicated by an interrupt flag.

If there is no interrupt: fetch the next instruction.

If there is an interrupt:
• Suspend execution of the current program
• Save its context
• Set the PC to the start address of the interrupt handler
• Process the interrupt
• Restore the context of the original program and continue its execution


Interrupt Cycle



Instruction Cycle with Interrupts

Following each execute cycle:


Check for interrupts
Handle active interrupts



Instruction Cycle with Interrupts
Disable interrupts
Processor will ignore further interrupts whilst processing one
interrupt
Interrupts remain pending and are checked after first interrupt
has been processed
Interrupts handled in sequence as they occur

Define priorities
Low priority interrupts can be interrupted by higher priority
interrupts
When higher priority interrupt has been processed, processor
returns to previous interrupt



Handling Multiple Interrupts
Sequential approach: once an interrupt handler has been started, it runs to completion.
(+) Simpler.
(-) Does not handle priority interrupts well. Example: incoming data might be lost.

Nested approach: a higher-priority device can interrupt a lower-priority one.
(+) Interrupts get handled in order of priority.
(-) More complex.
Priority Interrupts
Polling
• One common branch address for all interrupts.
• Interrupt sources are polled in priority sequence.
• If an interrupt signal is "on", control branches to a service routine for this source.
• (-) The time overhead to handle many interrupts can be excessive.
• The operation can be sped up with a hardware priority-interrupt unit.

Daisy-chain priority
• Hardware solution.
• Serial connection of all devices that request interrupts.
• The device with the highest priority takes the first position, the second highest the second position, and so on.
• The interrupt request line is shared by all devices.
Daisy-chain Priority Interrupt: A Serial Approach

[Figure: three devices sit on the processor data bus, each with its own vector address (VAD 1, VAD 2, VAD 3) and priority-in/priority-out (PI/PO) pins chained in series. All devices share the interrupt request line to the CPU's INT input; the CPU's interrupt acknowledge (INTACK) enters the PI input of the highest-priority device and propagates down the chain.]


One stage of the daisy-chain priority arrangement

[Figure: the interrupt request from the device sets flip-flop RF. RF and the priority-in signal PI determine the priority-out signal PO and the enable signal that places the vector address on the bus; an open-collector inverter drives the common interrupt request line to the CPU.]

PI  RF | PO  Enable
0   0  | 0   0
0   1  | 0   0
1   0  | 1   0
1   1  | 0   1

(From: Computer System Architecture, Morris Mano)
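The PI/RF table for one stage reduces to two gates, which a few lines of Python can verify against every row (a sketch; the signal names follow the figure):

```python
# One daisy-chain stage, from the truth table above:
#   PO     = PI AND (NOT RF)   -- pass priority along if not requesting
#   Enable = PI AND RF         -- claim the interrupt if requesting

def stage(pi, rf):
    po = pi & (rf ^ 1)
    enable = pi & rf
    return po, enable

# Every (PI, RF) row of the table maps to its (PO, Enable) outputs.
table = {(0, 0): (0, 0), (0, 1): (0, 0), (1, 0): (1, 0), (1, 1): (0, 1)}
for (pi, rf), expected in table.items():
    assert stage(pi, rf) == expected
```

Because PO of one stage feeds PI of the next, at most one stage in the whole chain can ever have Enable = 1, which is exactly what makes the scheme a priority arrangement.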
Parallel Priority Interrupt
Uses an interrupt register whose bits are set separately by the interrupt signal from each device.
Priority is established according to the position of the bits in the interrupt register.
A mask register is used to control the status of each interrupt request; mask bits are set programmatically.
A priority encoder generates the low-order bits of the VAD, which is transferred to the CPU.
The encoder sets an interrupt status flip-flop IST whenever a non-masked interrupt occurs.
An interrupt enable flip-flop IEN provides overall control over the interrupt system.
Parallel Priority Interrupt Hardware
[Figure: each device (disk I0, printer I1, reader I2, keyboard I3) sets its bit in the interrupt register. Each bit is ANDed with the corresponding mask register bit before entering the priority encoder, which produces the two low-order VAD bits x and y and sets IST. IST, gated by IEN, drives the interrupt line to the CPU; INTACK from the CPU enables the vector address onto the bus. From: Computer System Architecture, Morris Mano.]
Priority Encoder
A circuit that implements the priority function.
Logic: if two or more inputs arrive at the same time, the input having the highest priority takes precedence.

Inputs          Outputs
I0 I1 I2 I3  |  x  y  IST
1  d  d  d   |  0  0  1
0  1  d  d   |  0  1  1
0  0  1  d   |  1  0  1
0  0  0  1   |  1  1  1
0  0  0  0   |  d  d  0

Boolean functions:
x = I0′I1′    y = I0′I1 + I0′I2′    IST = I0 + I1 + I2 + I3
Interrupt Cycle
The interrupt enable flip-flop (IEN) can be set or cleared by program instructions.
A programmer can therefore allow interrupts (set IEN) or disallow them (clear IEN).
At the end of each instruction cycle the CPU checks IEN and IST. If either is equal to zero, control continues with the next instruction. If both equal 1, the interrupt is handled.

Interrupt micro-operations:
SP ← SP – 1      (decrement stack pointer)
M[SP] ← PC       (push PC onto the stack)
INTACK ← 1       (enable interrupt acknowledge)
PC ← VAD         (transfer vector address to PC)
IEN ← 0          (disable further interrupts)
Go to fetch the next instruction.
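The micro-operation sequence can be traced on a toy machine state (a sketch; the register names mirror the slide, and the dictionary-as-machine is purely illustrative):

```python
# Sketch of the interrupt micro-operations above on a toy machine state.

def service_interrupt(state, vad):
    state["SP"] -= 1                        # SP <- SP - 1
    state["M"][state["SP"]] = state["PC"]   # M[SP] <- PC (push return addr)
    state["INTACK"] = 1                     # enable interrupt acknowledge
    state["PC"] = vad                       # PC <- VAD (jump to handler)
    state["IEN"] = 0                        # disable further interrupts

cpu = {"PC": 750, "SP": 100, "IEN": 1, "INTACK": 0, "M": {}}
service_interrupt(cpu, vad=256)
print(cpu["PC"])       # 256: now fetching from the handler
print(cpu["M"][99])    # 750: return address saved on the stack
print(cpu["IEN"])      # 0: further interrupts disabled
```

The handler's final job (not shown) is to pop the saved PC and set IEN back to 1.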
Software Routines for handling Interrupts
Software routines are used to service interrupt requests and to control the interrupt hardware registers. Each device has its own service program, reached through a jump instruction stored at its assigned vector address:

JMP DISK       (program to service the magnetic disk)
JMP PRINTER    (program to service the line printer)
JMP READER     (program to service the character reader)
JMP KEYBOARD   (program to service the keyboard)

Example: the keyboard sets its interrupt bit while the CPU is executing the instruction at location 749. At the end of that instruction, the return address 750 is pushed onto the stack, the VAD for the keyboard is taken off the bus and placed into the PC, and control passes to the keyboard service routine. Once it completes, the PC is restored with the original address of the next instruction (750).


Small Group Activity
Consider a computer system that contains an I/O module controlling a simple keyboard/printer
teletype. The following registers are contained in the processor and connected directly to the
system bus:
INPR: Input Register, 8 bits
OUTR: Output Register, 8 bits
FGI: Input Flag, 1 Bit
FGO: Output Flag, 1 Bit
IEN: Interrupt Enable, 1 Bit
Keystroke input from the teletype and printer output to the teletype are controlled by the I/O
module. The teletype is able to encode an alphanumeric symbol to an 8-bit word and decode
an 8-bit word into an alphanumeric symbol.
a. Describe how the processor using the first four registers listed in this problem, can
achieve I/O with the teletype.
b. Describe how the function can be performed more efficiently by also employing IEN.
IF TIME: draw the circuit diagram for the priority encoder in the parallel priority interrupt
hardware diagram.



Interconnection Structures
Memory: outputs data; inputs read, write and timing signals, addresses, and data.

I/O module: outputs data and interrupt signals; inputs control signals, data, and addresses.

CPU: outputs addresses, control signals, and data; inputs instructions, data, and interrupt signals.


Bus Interconnection
Communication pathway connecting two or more devices.
Shared transmission medium, usually broadcast.
Typically 50 to a few hundred separate lines, divided into three functional groups:

Data lines
• At this level "data" and "instruction" are synonymous.
• Width is a key determinant of performance. (Example: with 32-bit words and a 16-bit data bus, 2 cycles are needed to transmit one word.)

Address lines
• Identify the source or destination of data (i.e. an address in memory).
• Width determines the maximum memory capacity of the system (e.g. the 8080 has 16 address bits, giving a 64K address space).

Control lines
• Control and timing signals (read, write, acknowledge, clock).
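Both width rules above are simple arithmetic, shown here as a sketch:

```python
# Bus-width arithmetic from the examples above.
from math import ceil

def cycles_per_word(word_bits, data_bus_bits):
    # A word wider than the data bus must be split over several cycles.
    return ceil(word_bits / data_bus_bits)

assert cycles_per_word(32, 16) == 2   # the slide's example
assert cycles_per_word(32, 32) == 1   # full-width bus: one cycle

# Address width bounds memory capacity: 16 address lines -> 64K locations.
assert 2 ** 16 == 64 * 1024
```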
Bus Interconnection

– Parallel lines on circuit boards


– Ribbon cables
– Strip connectors on mother boards
– Sets of wires



Single Bus Problems
Lots of devices on one bus leads to:
• Propagation delays: long data paths mean that coordination of bus use can adversely affect performance.
• A bottleneck if aggregate data transfer demand approaches bus capacity.

Most modern systems have at least 4 buses to solve this problem:
• Processor bus.
• Cache bus: dedicated bus for accessing the system cache.
• Local I/O bus: high-speed I/O bus for connecting performance-critical peripherals such as high-speed networks and disk storage devices.
• Standard I/O bus: connects slower peripherals such as mice and modems.
Traditional ISA (with Cache)



High Performance Architecture



Elements of Bus Design
Type
• Dedicated vs. multiplexed: dedicated by functionality (i.e. address vs. data) or dedicated to a physical subset of components.
Arbitration method
• Only one module can have control of the bus at any one time.
• Centralized vs. distributed.
Timing
• Synchronous vs. asynchronous.
Bus width
• Address.
• Data.
Data transfer type
• Read, write, read-modify-write, read-after-write, block.
Bus Arbitration
Hardware arbitration
• Serial arbitration: daisy chain.
• Parallel arbitration: each unit has its own bus arbiter; a priority encoder and decoder select which arbiter is granted the bus.

Dynamic arbitration algorithms: the system can change the priority of the devices during normal operation.
• Time slice: a fixed-length slice of bus time is offered sequentially to each processor in round-robin fashion.
• Polling: the address of each device in turn is placed on the polling lines; a device may activate bus-busy if it is being polled.
• LRU: least recently used.
• FIFO: first in, first out.
• Rotating daisy-chain: a dynamic extension of the daisy chain.

[Figure: hardware for parallel arbitration - four bus arbiters feed a priority encoder whose output drives a 2x4 decoder and the bus-ready line.]


Synchronous Timing
The occurrence of events on the bus is coordinated by a clock.

The bus includes a clock line.

The clock transmits alternating 1s and 0s of equal duration; a single 1-0 transmission is one clock cycle.

All events start at the beginning of a clock cycle.


Timing of Synchronous Bus Operations

• A stable address is placed on the line during the first clock cycle; once the address stabilizes, an address enable signal is issued.

• Read: the read enable signal is activated at the start of the next cycle; the memory module recognizes the address and, after one cycle, places valid data on the bus.

• Write: similar, but the address and data are placed on the bus early.

[Figure: timing diagram showing the stable address, the read/write enables, and the valid data-in / data-out windows relative to the clock.]
Timing of Asynchronous Bus Operations

The occurrence of one event follows the occurrence of a previous event.

For a read: place status and address on the line. Once stabilized, place a read signal on the bus. Memory decodes the address and places data on the bus. The processor then sends an "ACK", after which all lines can be dropped.
Data Transfer Type
The bus supports various data transfer types:
• Write (master to slave)
• Read (slave to master)

Multiplexed address/data bus:
• Write (cycle 1: address; cycle 2: data)
• Read (cycle 1: address; then, after a delay, a later cycle: data)

Non-multiplexed address/data bus:
• Write (address and data both sent in the same cycle)
• Read (address, followed by data once the address has stabilized)

Other types of transfer include:
• Read-after-write
• Block data transfer (address + multiple blocks of data)


Memory System Overview
Memory systems can be classified according to the
following key characteristics:
Location
• External – Peripherals such as disk and tape etc.
• Internal – Main memory, Registers, Cache
Capacity
Unit of Transfer
Access Method
Performance
Physical Type (What it is made of)
Physical Characteristics (How it behaves)
Organization



Memory
There are different types of computer memory tasked to store different types of data; they also have different capabilities and specialties when it comes to storing the data a computer needs.

The best-known computer memory is RAM, or Random Access Memory. It is called random access because any stored data can be accessed directly if you know the row and column that intersect a particular memory cell, so data can be accessed in any order. RAM's opposite is SAM, or Serial Access Memory, which stores data in a series of memory cells that can only be accessed in order; it operates much like a cassette tape, where you have to go through other memory cells before reaching the data you are looking for.


Memory
Other types of computer memory include ROM, or Read Only Memory. ROM is an integrated circuit already programmed with specific data that cannot be modified or changed, hence the name "read only". There is also another type of memory called virtual memory, a common component of most operating systems. It frees up the computer's RAM by checking for data that has not been used recently and moving it to the computer's hard disk, thereby making valuable space in RAM available for loading other applications. Virtual memory thus makes a computer appear to have almost unlimited RAM.


Memory



Associative Memory
• A memory that is capable of determining whether a given datum (the search word) is contained in one of its addresses or locations.

• This may be accomplished by a number of mechanisms. In some cases, parallel combinational logic is applied at each word in the memory and a test is made simultaneously for coincidence with the search word.

• In other cases, the search word and all of the words in the memory are shifted serially in synchronism; a single bit of the search word is then compared to the same bit of all of the memory words using as many single-bit coincidence circuits as there are words in the memory.

• Small parallel associative memories are used in cache memory and virtual memory mapping applications.
Associative Memory
• A type of memory closely associated with neural networks

– Bidirectional Associative Memory

– Autoassociative memory

– Hopfield net

• Content-addressable memory, a type of computer memory

• Transderivational search, an aspect of human memory



Associative Memory



Associative Memory
Most memory devices store and retrieve data by addressing specific memory locations. As a result, this addressing path often becomes the limiting factor for systems that rely on fast memory access.

The time required to find an item stored in memory can be reduced considerably if the stored data item can be identified for access by the content of the data itself rather than by its address. Memory that is accessed in this way is called content-addressable memory (CAM) or associative memory.

CAM provides a performance advantage over other memory search algorithms by comparing the desired information against the entire list of prestored entries simultaneously, often resulting in an order-of-magnitude reduction in search time.

CAMs are an outgrowth of RAM, the integrated-circuit memory that stores data temporarily.
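The lookup-by-content idea can be sketched in software (a sketch only: a list comprehension stands in for the parallel match hardware, which compares every stored word in a single step):

```python
# Sketch of a content-addressable lookup: every stored word is compared
# against the search word; hardware does all comparisons at once.

def cam_match(entries, search_word):
    # Return the addresses of all locations whose content matches.
    return [addr for addr, word in enumerate(entries) if word == search_word]

cam = [0x1F, 0x2A, 0x1F, 0x07]
print(cam_match(cam, 0x1F))   # [0, 2]  -> content found at two addresses
print(cam_match(cam, 0x99))   # []      -> no match anywhere
```

Note the inversion relative to RAM: the input is a value and the output is a set of addresses, rather than the other way round.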



Virtual Memory
• “Main Memory” commonly refers to Physical Memory, although a
computer uses an operating system-imposed Virtual Memory in addition
to physical memory.

• Most operating systems have a form of memory management that caters for memory
needs beyond a computer system’s physical memory through the use of a Swap File.

• There is a need for such memory management as operating systems themselves occupy
a significant portion of physical memory.

• A Swap File is a file located on a computer’s hard disk drive (HDD) that acts as an
extension to physical memory. However, the HDD has much slower access times than
any of the forms of memory discussed above. Hence, information is swapped between
the main memory and the swap file to ensure that the more frequently used information is
located in the main memory for faster access speeds.



Virtual Memory
• Virtual memory is an integral part of a computer architecture; all implementations require hardware support, typically in the form of a memory management unit built into the CPU. Consequently, older operating systems generally had no virtual memory functionality, though notable exceptions include the Atlas, the B5000, the IBM System/360 Model 67, and the IBM System/370 mainframes of the early 1970s.

• Embedded systems and other special-purpose computer systems which require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable interrupts that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware.
Virtual Memory



Virtual Memory
• Systems that employ virtual memory:

1. use hardware memory more efficiently than systems without virtual memory.

2. make the programming of applications easier by:
   – hiding fragmentation;
   – delegating to the kernel the burden of managing the memory hierarchy (there is no need for the program to handle overlays explicitly);
   – obviating the need to relocate program code or to access memory with relative addressing.


Cache Memory
• A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory.

• The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations.

• As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.

• When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.
Cache Memory Hierarchy



The operation of cache memory
[Figure: main memory (DRAM), cache memory (SRAM) and the CPU, joined by bus connections]

1. The cache fetches data from addresses next to the current address in main memory.
2. The CPU checks whether the next instruction it requires is in the cache.
3. If it is, the instruction is fetched from the cache, a very fast operation.
4. If not, the CPU has to fetch the next instruction from main memory, a much slower process.
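The hit/miss flow above can be modelled with a toy direct-mapped cache (a sketch under simplifying assumptions: one word per line, four lines, and a dictionary standing in for DRAM; real caches use multi-byte lines with tag/index/offset fields):

```python
# Toy direct-mapped cache illustrating the hit/miss flow above.

class Cache:
    def __init__(self, main_memory, n_lines=4):
        self.main = main_memory
        self.n = n_lines
        self.lines = {}                  # index -> (tag, value)

    def read(self, addr):
        index, tag = addr % self.n, addr // self.n
        line = self.lines.get(index)
        if line and line[0] == tag:
            return line[1], "hit"        # fast path: served from SRAM
        value = self.main[addr]          # slow path: fetch from DRAM
        self.lines[index] = (tag, value) # ...and keep a copy for next time
        return value, "miss"

ram = {a: a * 10 for a in range(32)}
c = Cache(ram)
print(c.read(5))   # (50, 'miss')  first access goes to main memory
print(c.read(5))   # (50, 'hit')   repeat access is served by the cache
```

Repeated reads of nearby or recently used addresses hit the cache, which is exactly why the average latency approaches the cache latency.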
Cache Memory
• This form of memory can be considered an intermediary between the main physical RAM and the CPU. The cache makes any data frequently used by the CPU instantly available. If the required information is not located in the cache, a fetch is made from main memory.

• There are two levels of cache: level 1 cache (primary cache) and level 2 cache (secondary cache).

1. Level 1 cache is built directly on the CPU, just like the registers. It is small, ranging anywhere between 2 kilobytes (KB) and 128 KB. As this cache is closer to the CPU than level 2 cache, its transfer speeds are much faster.

2. Level 2 cache is usually situated in close proximity to, but off, the CPU chip; in some systems, however, it is built directly onto the CPU itself, like the level 1 cache. The size of level 2 cache ranges from 256 KB to 2 megabytes (MB). Both levels of cache use static random access memory (SRAM) to hold the data.


Cache Memory



Cache Memory
• Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM.

• As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from larger memory.

• Cache memory is sometimes described in levels of closeness and accessibility to the microprocessor.

• An L1 cache is on the same chip as the microprocessor. (For example, the PowerPC 601 processor has a 32-kilobyte level-1 cache built into its chip.)

• L2 cache is usually a separate static RAM (SRAM) chip. The main RAM is usually a dynamic RAM (DRAM) chip.
Cache Memory



Thank You

