
Memory

In computing, memory refers to the physical devices used to store programs (sequences of instructions) or data (e.g. program state information) on a temporary or permanent basis for use in a computer or other digital electronic device. The term primary memory is used for fast physical memory systems (i.e. RAM), as distinct from secondary memory, which consists of physical devices for program and data storage that are slow to access but offer higher capacity. Primary memory stored on secondary memory is called "virtual memory". The term "storage" is often (but not always) used to describe traditional forms of secondary memory such as tape, magnetic disks and optical discs (CD-ROM and DVD-ROM). The term "memory" is often (but not always) associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary memory but also for other purposes in computers and other digital electronic devices.

There are two main types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory (sometimes used as secondary, sometimes as primary computer memory) and ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of volatile memory are primary memory (typically dynamic RAM, DRAM) and fast CPU cache memory (typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity per area unit than DRAM).

Semiconductor memory is organized into memory cells or bi-stable flip-flops, each storing one binary bit (0 or 1). The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers are normally not considered memory, since they only store one word and do not include an addressing mechanism.
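As an illustration of the addressing arithmetic above, the following C sketch computes how many words an N-bit address can select; the 16-bit address width and 8-bit word size are arbitrary example values, not properties of any particular machine.

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch: an N-bit address can select 2^N distinct words.
   The address width and word size below are arbitrary example values. */
int main(void) {
    unsigned address_bits = 16;                   /* N = 16 address lines   */
    uint64_t words = (uint64_t)1 << address_bits; /* 2^N addressable words  */
    unsigned word_size_bits = 8;                  /* 8-bit (1-byte) words   */

    printf("%u address bits -> %llu words -> %llu bytes total\n",
           address_bits,
           (unsigned long long)words,
           (unsigned long long)(words * word_size_bits / 8));
    return 0;
}

With these example values the program reports 65,536 addressable words, i.e. 64 KB of byte-wide memory.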

Volatile memory
Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as power is connected and is easy to interface to, but uses six transistors per bit. DRAM is more complicated to interface to and control, and needs regular refresh cycles to prevent its contents being lost. However, DRAM uses only one transistor and a capacitor per bit, allowing it to reach much higher densities and, with more bits on a memory chip, be much cheaper per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is used for cache memories. SRAM is commonplace in small embedded systems, which might only need tens of kilobytes or less.

Non-volatile memory
Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (see ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disks, floppy disks and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards.

Random-access memory (RAM)


Random-access memory (RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order with a worst-case performance of constant time. RAM is often associated with volatile types of memory (such as DRAM memory modules), where the stored information is lost if power is removed, but many types of non-volatile memory are also random-access. The first RAM modules to come onto the market were created in 1951 and were sold until the late 1960s and early 1970s.

Types of RAM
The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In static RAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.

Synchronous dynamic random-access memory (SDRAM) is DRAM that is synchronized with the system bus. Classic DRAM has an asynchronous interface, which means that it responds as quickly as possible to changes in control inputs. SDRAM has a synchronous interface, meaning that it waits for a clock signal before responding to control inputs and is therefore synchronized with the computer's system bus. The clock is used to drive an internal finite state machine that pipelines incoming commands.

Double data rate synchronous dynamic random-access memory (DDR SDRAM) is a class of memory integrated circuits used in computers. DDR SDRAM (sometimes referred to as DDR1 SDRAM) has been superseded by DDR2 SDRAM and DDR3 SDRAM. Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible through stricter control of the timing of the electrical data and clock signals, transferring data on both the rising and falling edges of the clock.
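The following back-of-the-envelope C sketch shows why transferring data on both clock edges doubles the peak transfer rate; the 200 MHz clock and 64-bit bus width are assumed, DDR-400-style example figures rather than values from the text.

#include <stdio.h>

/* Back-of-the-envelope sketch of why "double data rate" doubles bandwidth:
   data is transferred on both the rising and falling clock edge.
   The clock rate and bus width below are assumed example figures. */
int main(void) {
    double bus_clock_mhz   = 200.0;  /* I/O bus clock in MHz (assumed)      */
    int    transfers_clock = 2;      /* DDR: two transfers per clock cycle  */
    int    bus_width_bits  = 64;     /* typical module data-bus width       */

    double mb_per_s = bus_clock_mhz * 1e6 * transfers_clock
                      * (bus_width_bits / 8.0) / 1e6;

    printf("Peak transfer rate: %.0f MB/s\n", mb_per_s);  /* -> 3200 MB/s */
    return 0;
}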

Read-only memory
Read-only memory (ROM) is a class of storage medium used in computers and other electronic devices. Data stored in ROM cannot be modified, or can be modified only slowly or with difficulty, so it is mainly used to distribute firmware (software that is very closely tied to specific hardware and unlikely to need frequent updates).

Types

Semiconductor based
Classic mask-programmed ROM chips are integrated circuits that physically encode the data to be stored, and thus it is impossible to change their contents after fabrication. Other types of non-volatile solid-state memory permit some degree of modification:

1. Programmable read-only memory (PROM), or one-time programmable ROM (OTP), can be written to or programmed via a special device called a PROM programmer. Typically, this device uses high voltages to permanently destroy or create internal links (fuses or anti-fuses) within the chip. Consequently, a PROM can only be programmed once.

2. Erasable programmable read-only memory (EPROM) can be erased by exposure to strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process that again needs higher than usual voltage applied. Repeated exposure to UV light will eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000 cycles of erasing and reprogramming. EPROM chip packages can often be identified by the prominent quartz "window" which allows UV light to enter. After programming, the window is typically covered with a label to prevent accidental erasure. Some EPROM chips are factory-erased before they are packaged and include no window; these are effectively PROM.

3. Electrically erasable programmable read-only memory (EEPROM) is based on a similar semiconductor structure to EPROM, but allows its entire contents (or selected banks) to be electrically erased, then rewritten electrically, so that it need not be removed from the computer (or camera, MP3 player, etc.). Writing or flashing an EEPROM is much slower (milliseconds per bit) than reading from a ROM or writing to a RAM (nanoseconds in both cases).

3.1 Electrically alterable read-only memory (EAROM) is a type of EEPROM that can be modified one bit at a time. Writing is a very slow process and again needs a higher voltage (usually around 12 V) than is used for read access. EAROMs are intended for applications that require infrequent and only partial rewriting. EAROM may be used as non-volatile storage for critical system setup information; in many applications, EAROM has been supplanted by CMOS RAM supplied by mains power and backed up with a lithium battery.

3.2 Flash memory (or simply flash) is a modern type of EEPROM invented in 1984. Flash memory can be erased and rewritten faster than ordinary EEPROM, and newer designs feature very high endurance (exceeding 1,000,000 cycles). Modern NAND flash makes efficient use of silicon chip area, resulting in individual ICs with a capacity as high as 32 GB as of 2007; this feature, along with its endurance and physical durability, has allowed NAND flash to replace magnetic storage in some applications (such as USB flash drives). Flash memory is sometimes called flash ROM or flash EEPROM when used as a replacement for older ROM types, but not in applications that take advantage of its ability to be modified quickly and frequently.

Virtual memory
Virtual memory is the technique that allows a program to use more main memory than the computer physically has. It gives programmers the illusion that more main memory is available than is physically provided in the computer. The capacity of virtual memory depends on the design of the processor. Virtual memory is a system in which all physical memory is controlled by the operating system. When a program needs memory, it requests it from the operating system, and the operating system decides in which physical location to place it. Most modern operating systems that support virtual memory also run each process in its own dedicated address space, so each program appears to have sole access to the virtual memory.

The purpose of virtual memory is to enlarge the address space, the set of addresses a program can use. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory therefore would not fit in main memory all at once. Nevertheless, the computer can execute such a program by copying into main memory those portions of the program needed at any given point during execution. To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on disk until it is needed; when the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.

In a system using virtual memory, the physical memory is divided into equally sized pages. The memory addressed by a process is also divided into logical pages of the same size. When a process references a memory address, the memory manager fetches from disk the page that includes the referenced address and places it in a vacant physical page in RAM. Subsequent references within that logical page are routed to the same physical page. When the process references an address from another logical page, that page too is fetched into a vacant physical page and becomes the target of subsequent similar references.
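The paging mechanism described above can be sketched as a simple table lookup. The C sketch below assumes a 4 KiB page size and a tiny hand-filled page table purely for illustration; a real operating system's page tables and fault handling are far more elaborate.

#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of virtual-to-physical address translation with
   fixed-size pages. The page size and page-table contents are
   illustrative assumptions, not a description of any real OS. */

#define PAGE_SIZE   4096u                 /* bytes per page (assumed)      */
#define NUM_VPAGES  8u                    /* virtual pages in this example */

/* page_table[v] holds the physical page frame for virtual page v,
   or -1 if the page is not resident in RAM (a "page fault"). */
static int page_table[NUM_VPAGES] = { 3, -1, 0, -1, 5, -1, -1, 1 };

static long translate(uint32_t vaddr) {
    uint32_t vpage  = vaddr / PAGE_SIZE;  /* which logical page            */
    uint32_t offset = vaddr % PAGE_SIZE;  /* position within the page      */

    if (vpage >= NUM_VPAGES || page_table[vpage] < 0)
        return -1;                        /* page fault: OS must load page */

    return (long)page_table[vpage] * PAGE_SIZE + offset;
}

int main(void) {
    uint32_t vaddr = 2 * PAGE_SIZE + 123; /* an address in virtual page 2  */
    long paddr = translate(vaddr);

    if (paddr < 0)
        printf("page fault at virtual address %u\n", vaddr);
    else
        printf("virtual %u -> physical %ld\n", vaddr, paddr);
    return 0;
}

In this toy setup, virtual page 2 is mapped to physical frame 0, so the reference succeeds; a reference into virtual page 1 would instead report a page fault and, on a real system, trigger the operating system to fetch that page from disk.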

Cache Memory
[Figure: the CPU connected through cache memory to primary memory and secondary memory]

The cache memory is placed between the CPU and main memory. The processor is connected to the cache memory through a cache controller. It is a semiconductor memory and consists of static RAMs. Its access time is about 10 ns (1 nanosecond = 10⁻⁹ seconds), which is much less than that of the main memory; the access time of the main memory is about 50 ns. The capacity of the cache memory is 2 to 3 percent of that of the main memory. It stores instruction codes and data which are to be currently executed by the CPU, and it is used to reduce the average access time for instructions and data that are normally stored in the main memory. A cache memory also needs a cache controller; cache controller ICs are available.
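Using the 10 ns and 50 ns access times quoted above, a simple weighted-average model shows how the cache lowers the average access time; the 95% hit ratio in this C sketch is an assumed, illustrative figure, and the model ignores refinements such as miss penalties beyond a single main-memory access.

#include <stdio.h>

/* Rough sketch of how a cache lowers the average access time.
   The 10 ns cache and 50 ns main-memory figures come from the text;
   the 95% hit ratio is an assumed, illustrative value. */
int main(void) {
    double cache_ns  = 10.0;   /* cache access time (from the text)       */
    double memory_ns = 50.0;   /* main-memory access time (from the text) */
    double hit_ratio = 0.95;   /* assumed fraction of accesses hitting    */

    double avg_ns = hit_ratio * cache_ns + (1.0 - hit_ratio) * memory_ns;
    printf("average access time: %.1f ns\n", avg_ns);  /* -> 12.0 ns */
    return 0;
}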

There are two types of cache schemes: write-through and write-back. In a write-through cache, the main memory is updated each time the CPU writes into the cache. The advantage of the write-through cache is that the main memory always contains the same data as the cache. In a write-back cache, the main memory is updated only when a modified cache line is replaced; this increases performance by reducing the utilization of buses and preventing unnecessary bottlenecks in the system. The write-back scheme is faster and hence is preferred.

A cache memory, often simply called a cache, is a fast local memory used as a buffer for a more distant, larger and slower memory, in order to improve the average memory access speed. Although relatively small, cache memories rely on the principle of locality, which indicates that if they retain recently referenced information, they will tend to contain much of the information needed in the near future. The cache is a small amount of high-speed memory, usually with a memory cycle time comparable to the time required by the CPU to fetch one instruction. The cache is usually filled from main memory when instructions or data are fetched into the CPU. Often the main memory will supply a wider data word to the cache than the CPU requires, filling the cache more rapidly. The amount of information which is replaced at one time in the cache is called the line size of the cache. This is normally the width of the data bus between the cache memory and the main memory.
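The principle of locality can be seen in how a program traverses a large array. The C sketch below contrasts a row-by-row walk, which uses every byte of each fetched cache line before moving on, with a column-by-column walk that strides across lines; the array dimensions are arbitrary example values.

#include <stdio.h>

/* Small sketch of the principle of locality. Walking the array row by row
   touches consecutive addresses, so each cache line fetched from main
   memory is fully used before the next one is needed; walking column by
   column jumps COLS * sizeof(int) bytes per access and wastes most of
   every fetched line. The array size is an arbitrary choice. */

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

int main(void) {
    long sum = 0;

    /* cache-friendly: consecutive addresses, good spatial locality */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];

    /* cache-unfriendly: strided accesses, each one likely a new line */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += a[i][j];

    printf("sum = %ld\n", sum);
    return 0;
}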

