
ASSIGNMENT 1

MICROPROCESSOR SYSTEMS

TEACHER: HARIS ANIS

NAME: MOHAMMAD UZAIR ASLAM
CLASS: BICSE 7A
REGISTRATION NUMBER: 161

Q1

Marks: 6

Compare and contrast Intel's IA-32 architecture with Intel's P6 architecture. Take the Intel 80386 as an example case of IA-32 and the Pentium Pro as an example case of the P6 architecture.
Solution:

IA-32 architecture:
The IA-32 instruction set was introduced in the Intel 80386 microprocessor in 1986 and remains the basis of most PC microprocessors over twenty years later. Even though the instruction set has remained intact, the successive generations of microprocessors that run it have become much faster. Within various programming language directives, IA-32 is still sometimes referred to as the "i386" architecture. Intel Corporation is the inventor and the biggest supplier of IA-32 processors; the second biggest supplier is AMD. As of 2011, both Intel and AMD have moved to x86-64 but still produce IA-32 processors such as the Intel Atom and Geode. VIA Technologies continues to produce the VIA C3/VIA C7 family of "pure" IA-32 devices, and for a time Transmeta also produced IA-32 processors.

IA-32 is a 32-bit architecture with a paging translation unit, and the 80386 was the first processor of its line to support virtual memory in this way. It also supported hardware debugging. IA-32 offers three operating modes: 1. Real mode. 2. Protected mode. 3. Virtual 8086 mode. Protected mode, which was already available on the 16-bit processors, was extended to address up to 4 GB of memory. The most important feature added with IA-32 was the 32-bit flat memory model, under which low-level software treats memory as a single linear address space that the CPU can address directly.
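The flat model can be pictured with a short C sketch. This is a minimal illustration written for this answer, not part of any Intel code; it assumes a 32-bit flat setup where every segment base is 0, so the offset alone spans the full 4 GB address space. The function name flat_linear_address is hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Minimal sketch: under the flat memory model the segment bases are 0,
 * so a 32-bit offset is itself the linear address the CPU uses. */
static uint32_t flat_linear_address(uint32_t offset)
{
    uint32_t segment_base = 0;    /* flat model: CS/DS/SS all based at 0 */
    return segment_base + offset; /* the offset alone can reach all 4 GB */
}

int main(void)
{
    /* A hypothetical offset maps straight to the same linear address. */
    printf("linear address: 0x%08X\n", (unsigned)flat_linear_address(0x00401000u));
    return 0;
}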

Intel's P6 architecture:

The P6 microarchitecture is the sixth-generation Intel x86 microarchitecture, implemented by the Pentium Pro microprocessor introduced in November 1995. It is sometimes referred to as i686. It was succeeded by the NetBurst microarchitecture in 2000, but was eventually revived in the Pentium M line of microprocessors; the successor to the Pentium M variant of P6 is the Core microarchitecture. A notable development of this design was that it used 5.5 million transistors in the CPU. The Pentium Pro introduced a completely new microarchitecture, P6: a 12-stage superpipelined design that uses an instruction pool. The Pentium Pro pipeline had extra decode stages to dynamically translate IA-32 instructions into buffered micro-operation sequences, which could then be analysed, reordered, and renamed in order to detect parallelizable operations that could be issued to more than one execution unit at once. It also had a wider 36-bit address bus, allowing a larger physical address space to be reached. An important feature is that x86 instructions are decoded into 118-bit micro-ops, which are RISC-like; so while the architecture overall is CISC, it divides its work into smaller RISC-like operations. The decoders can produce up to 4 micro-ops per cycle.
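As a rough picture of that decoding step, the following C sketch models one read-modify-write x86 instruction being split into three simple micro-ops. The micro_op structure and the decomposition shown are illustrative assumptions made for this answer; they are not Intel's real 118-bit encoding.

#include <stdio.h>

/* Purely illustrative model: the P6 front end breaks a complex x86
 * instruction into simple, RISC-like micro-ops that can be scheduled
 * independently by the out-of-order core. */
typedef struct { const char *op; const char *dst; const char *src; } micro_op;

int main(void)
{
    /* Hypothetical example: add [counter], eax -> load / ALU / store */
    micro_op uops[] = {
        { "load",  "tmp",       "[counter]" },
        { "add",   "tmp",       "eax"       },
        { "store", "[counter]", "tmp"       },
    };
    for (size_t i = 0; i < sizeof uops / sizeof uops[0]; i++)
        printf("uop%zu: %-5s %s <- %s\n", i + 1, uops[i].op, uops[i].dst, uops[i].src);
    return 0;
}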

Comparison:
Overall, the P6 microarchitecture was far superior to the original IA-32 implementation in the 80386: it offered more transistors, greater processing power, and a performance increase of roughly 25 to 30% over its predecessors. However, P6 processors were costly, and the recommended operating system also had to be bought to gain optimal performance. The most standout feature of P6 was the on-board L2 cache: previously only the L1 cache was available on board, but with P6 both the L1 and L2 caches were on board, increasing performance many times over. Therefore Intel's P6 microarchitecture was far superior to Intel's earlier IA-32 implementation.

Q2

Marks: 4

Describe the role of Global Descriptor Tables (GDTs) & Local Descriptor Tables (LDT) in the context of Intel Memory Management.

Logical memory addressing:


The Global Descriptor Table (GDT) is specific to the IA-32 architecture. It contains entries telling the CPU about memory segments. The GDT is meant to contain global memory segments; there is only a single GDT per processor, while there may be many LDTs. Every memory access a program performs goes through a segment. On the 386 and later processors, 32-bit segment offsets and limits make it possible for segments to cover the entire addressable memory, which makes segment-relative addressing transparent to the user. Logical addressing using segmentation has been common on Intel processors almost from the beginning, and to date there is no way to operate an Intel-compatible processor without segmentation. Traditionally a segment is 64 KB in size, and to address memory one needs a two-part address composed of two 16-bit words separated by a colon, such as 0xDEAD:BEEF. The first word is called the segment selector, and the second word is called the offset. So 0x0000:1000 refers to the first segment in memory, at an offset of 0x1000.
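As a small worked example of the classic segment:offset scheme, the C sketch below computes the 20-bit real-mode physical address as segment * 16 + offset. This is an illustration added for clarity; the helper name real_mode_physical is hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Minimal sketch: in the classic 16-bit scheme the physical address is
 * segment shifted left by 4 bits (i.e. * 16) plus the 16-bit offset. */
static uint32_t real_mode_physical(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* 0xDEAD:BEEF from the text -> 0xDEAD0 + 0xBEEF = 0xEA9BF */
    printf("0x%05X\n", (unsigned)real_mode_physical(0xDEAD, 0xBEEF));
    /* 0x0000:1000 -> physical 0x01000 */
    printf("0x%05X\n", (unsigned)real_mode_physical(0x0000, 0x1000));
    return 0;
}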

Since the 80386 processor, there has been a much more flexible implementation of segmentation: a segment can be any size needed, up to the addressable limit of 4 gigabytes. Much like there are tables to keep track of linear memory pages, there are table structures to keep track of logical segments of memory. Two types of tables exist: the Global Descriptor Table (GDT) and the Local Descriptor Table (LDT). Each processor must have one and only one GDT, and it cannot be larger than 64 KB. The upside is that there can be one Local Descriptor Table per process, so that each process on the computer can have its very own logical memory address space, just as each process can have its own linear address space. The GDT looks almost exactly like any LDT with one exception: the first entry in the GDT must always be filled with zeros. This serves several purposes, such as debugging and access checks. In any given LDT, entry 0 can be anything we want. The GDT and LDTs contain segment descriptors that allow us to arbitrarily define the base address, size, and attributes of a segment for use by a process.
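To make the descriptor idea concrete, here is a minimal C sketch of the commonly documented 8-byte IA-32 segment descriptor layout, together with the mandatory all-zero entry 0 of the GDT. It assumes a GCC/Clang-style packed attribute and is an illustration written for this answer, not code from the assignment text.

#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of an 8-byte segment descriptor as stored in the GDT or an
 * LDT, following the commonly documented IA-32 field layout. */
struct segment_descriptor {
    uint16_t limit_low;        /* limit bits 0-15                        */
    uint16_t base_low;         /* base  bits 0-15                        */
    uint8_t  base_mid;         /* base  bits 16-23                       */
    uint8_t  access;           /* type, S, DPL (privilege), present bit  */
    uint8_t  limit_high_flags; /* limit bits 16-19 plus granularity flags */
    uint8_t  base_high;        /* base  bits 24-31                       */
} __attribute__((packed));     /* assumes GCC/Clang */

int main(void)
{
    /* Entry 0 of the GDT is the null descriptor and must stay all zeros. */
    struct segment_descriptor null_descriptor = {0, 0, 0, 0, 0, 0};
    printf("descriptor size: %zu bytes\n", sizeof null_descriptor); /* 8 */
    return 0;
}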

The figure below illustrates a few segment descriptors pointing to arbitrary linear memory locations. The empty boxes of linear memory represent arbitrarily sized segments; the LDT boxes are individual segment descriptors, and the numbers inside the segment descriptors are called segment selectors. Specifying a logical address works the same way as it did 20 years ago, except that the offset can now be just about anything. The selector values shown in that example, however, are not in their real format. Segment selectors are still 16 bits long, just as they were 20 years ago; the difference is that the first three bits of a selector now carry extra information, such as the privilege level and which table to use. The following figure illustrates two segment selectors at the binary level.

The grayed-out areas in the selectors are the Requested Privilege Level (RPL) field, occupying 2 bits of the selector. The Index section is simply an index into the descriptor table containing the segment descriptor for the selector. The main difference between these two selectors is bit 2, the Table Indicator (TI) bit. When set, the selector refers to a segment contained in the current Local Descriptor Table; when cleared, the selector indexes a segment in the GDT.
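A short C sketch can make the selector layout concrete: bits 0-1 hold the RPL, bit 2 is the TI bit, and bits 3-15 form the table index. The decode_selector helper and the two example values are hypothetical, added only to illustrate the bit fields described above.

#include <stdio.h>
#include <stdint.h>

/* Minimal sketch: pull the RPL, TI, and index fields out of a 16-bit
 * segment selector. */
static void decode_selector(uint16_t sel)
{
    unsigned rpl   = sel & 0x3;        /* bits 0-1: Requested Privilege Level */
    unsigned ti    = (sel >> 2) & 0x1; /* bit 2: Table Indicator (0=GDT, 1=LDT) */
    unsigned index = sel >> 3;         /* bits 3-15: descriptor table index */
    printf("selector 0x%04X: index=%u table=%s RPL=%u\n",
           sel, index, ti ? "LDT" : "GDT", rpl);
}

int main(void)
{
    decode_selector(0x0008); /* index 1 in the GDT, RPL 0 */
    decode_selector(0x0017); /* index 2 in the LDT, RPL 3 */
    return 0;
}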

Q3 Compare the CISC and RISC architectures, and give your final analysis of which is better and why.

CISC:
A complex instruction set computer (CISC) is a computer where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC).[1]

CISC stands for Complex Instruction Set Computer. Most PCs use CPUs based on this architecture; for instance, Intel and AMD CPUs are based on CISC architectures.

Typically CISC chips have a large number of different and complex instructions. The philosophy behind this is that hardware is always faster than software; therefore one should provide a powerful instruction set, giving programmers assembly instructions that can do a lot with short programs.

RISC:
Reduced instruction set computing, or RISC, is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer (also RISC).[1] There are many proposals for precise definitions, but the term is slowly being replaced by the more descriptive "load-store architecture". Well-known RISC families include DEC Alpha, AMD 29k, ARC, ARM, Atmel AVR, Blackfin, MIPS, PA-RISC, Power (including PowerPC), SuperH, and SPARC. RISC chips evolved around the mid-1980s as a reaction to CISC chips. The philosophy behind them is that almost no one uses the complex assembly instructions provided by CISC; people mostly use compilers, which rarely generate such instructions. Apple, for instance, uses RISC chips. Therefore fewer, simpler, and faster instructions would be better than the large, complex, and slower CISC instructions, even though more instructions are needed to accomplish a task. Another advantage of RISC is that, in theory, because of the simpler instructions, RISC chips require fewer transistors, which makes them easier to design and cheaper to produce.
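To illustrate the trade-off, the sketch below shows one C statement together with approximate assembly (in comments) for a CISC machine and for a RISC-style load-store machine. The assembly is illustrative only, and the variable name counter is hypothetical.

/* Minimal sketch: the same increment expressed in CISC versus RISC style. */
static volatile int counter;

void bump(void)
{
    counter = counter + 1;
    /* CISC (x86) can express this as a single read-modify-write instruction:
     *     add dword ptr [counter], 1
     *
     * A RISC load-store machine (MIPS-like, approximate) needs separate steps:
     *     lw    $t0, counter     # load from memory
     *     addiu $t0, $t0, 1      # arithmetic in a register
     *     sw    $t0, counter     # store back to memory
     *
     * Fewer, more complex instructions versus more, simpler instructions. */
}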

Comparison
There is considerable disagreement among experts about which architecture is better. Some say that RISC is cheaper and faster and therefore the architecture of the future. Others note that by making the hardware simpler, RISC puts a greater burden on the software: software becomes more complex, and developers need to write more instructions for the same tasks. They therefore argue that RISC is not the architecture of the future, since conventional CISC chips are becoming faster and cheaper anyway.

1. Commonly, CISC chips are relatively slow (compared to RISC chips) per instruction, but they need fewer instructions than RISC.

2. If we set aside the embedded market and mainly look at the market for PCs, workstations, and servers, at least 75% of the processors are based on the CISC architecture. Most of them follow the x86 standard (Intel, AMD, etc.), but even in mainframe territory CISC is dominant via the IBM/390 chip.

3. RISC and CISC architectures are becoming more and more alike. Many of today's RISC chips support just as many instructions as yesterday's CISC chips. The PowerPC 601, for example, supports more instructions than the Pentium, yet the 601 is considered a RISC chip while the Pentium is definitely CISC. Furthermore, today's CISC chips use many techniques formerly associated with RISC chips.

x86
An important factor is also that the x86 standard, as used by Intel and AMD for instance, is based on the CISC architecture. x86 is the standard for home PCs; Windows 95 and 98 won't run on any other platform. Therefore companies like AMD and Intel will not abandon the x86 market overnight, even if RISC were more powerful.

4. Changing their chips in such a way that they stay compatible with the CISC x86 standard on the outside but use a RISC architecture inside is difficult, and it introduces all kinds of overhead that could undo the possible gains. Nevertheless, Intel and AMD are doing this more or less with their current CPUs, and most acceleration mechanisms available to RISC CPUs are now available to x86 CPUs as well.

5. In the x86 market the competition is fierce and prices are low, even lower than for most RISC CPUs. Although RISC prices are also dropping, a Sun UltraSPARC, for instance, is still more expensive than an equally performing PII workstation.

6. Equal, that is, in terms of integer performance. In the floating-point area RISC still holds the crown. However, CISC's 7th-generation x86 chips like the K7 will catch up with that.
