
 

History of Embedded Systems 


​By Amal Sajeev 
 
The history of embedded systems and algorithms begins with the advent of the 
transistor and of low-level computer languages, respectively. The development of 
miniature embedded systems was facilitated by the arrival of semiconductor diodes 
and nanotechnology, and the technology of miniature embedded systems has 
improved exponentially ever since. 
 
The origins of the microprocessor and the microcontroller can be traced back to 
the MOS integrated circuit, an integrated circuit chip fabricated from MOSFETs 
(Metal Oxide Semiconductor Field-Effect Transistors), which was developed in the 
1960s. MOS chips reached higher transistor densities than bipolar chips and 
eventually achieved a lower manufacturing cost as well. 
 
The first multi-chip microprocessors, a title usually attributed to the Four-Phase 
Systems AL1 and the Garrett AiResearch MP944, developed in 1969 and 1971 
respectively, were built from multiple MOS LSI chips, in which hundreds of 
transistors could be integrated onto a single MOS chip. This was called Large Scale 
Integration (LSI). The first single-chip microprocessor was the Intel 4004, developed 
by Federico Faggin in collaboration with Intel engineers Marcian Hoff and Stan 
Mazor and Busicom engineer Masatoshi Shima, and released in 1971. These were 
the initial technological developments that led to the development of superior 
embedded systems over time. 
 
The Apollo Guidance Computer, which had 2048 words of RAM and a 16-bit word 
length (15 data bits and one parity bit), was one of the first embedded systems to 
be successfully deployed in several applications. It was developed by the MIT 
Instrumentation Laboratory in 1965 for the Apollo program. The software of the 
AGC was stored in core rope memory, which consisted of wires threaded through 
and around magnetic cores. The performance of this system was comparable to 
that of the first generation of home computers, such as the Apple II or the TRS-80. 
It was considered the riskiest component of the Apollo project, since it used 
monolithic integrated circuits, a technology that was still quite new at the time, to 
reduce the mass of the system. Since the 1960s, there has been a dramatic increase 
in the processing power and speed of embedded systems, combined with a steep 
decrease in manufacturing cost, which has allowed them to be incorporated into 
most commercial products. 
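
As an aside on that word format: each 16-bit AGC word carried 15 data bits plus 
one parity bit, with the parity bit chosen so that the total number of 1 bits in the 
word was odd, letting the hardware detect single-bit memory errors. The small C 
sketch below illustrates the idea only; the placement of the parity bit in the 
low-order position is an assumption made for illustration, not the AGC's actual 
bit layout. 

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch of odd parity over a 15-bit data word.
     * Placing the parity bit in the low-order position is an assumption
     * for illustration; it is not the AGC's actual bit layout. */
    static uint16_t add_odd_parity(uint16_t data15)
    {
        data15 &= 0x7FFF;                  /* keep only the 15 data bits */
        int ones = 0;
        for (uint16_t v = data15; v != 0; v >>= 1)
            ones += v & 1;                 /* count the 1 bits */
        uint16_t parity = (ones % 2 == 0) ? 1 : 0;  /* force an odd total */
        return (uint16_t)((data15 << 1) | parity);
    }

    int main(void)
    {
        uint16_t stored = add_odd_parity(0x1234);
        printf("15-bit data 0x1234 stored as 0x%04X\n", stored);
        return 0;
    }

A word read back with an even number of 1 bits would indicate a single-bit error. 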
 
The Intel 4004, which was designed for small systems such as calculators, still 
required external memory and support chips. By the early 1980s, manufacturers 
had managed to integrate components such as memory and input/output systems 
onto the same chip as the processor, producing the microcontroller. Analog 
components, especially knob-based ones such as potentiometers and variable 
capacitors, were replaced with digital alternatives to cut costs. 
 
A modern low-cost microcontroller is programmed to serve the same functions 
that would previously have required several expensive components. Though this 
process has resulted in more complex embedded systems, the complexity is largely 
contained within the microcontroller itself. Most of the design effort has shifted 
to software, and the number of additional components required has fallen sharply. 
Prototypes and tests of new software and systems can now be completed at 
comparatively minimal cost, without requiring the construction of a new circuit 
around an embedded processor. 
