History of Programming Paradigms

A programming paradigm is a fundamental style of computer programming.

Paradigms differ in the concepts and abstractions used to represent the elements of a program (such as objects, functions, variables and constraints) and in the steps that compose a computation. A programming paradigm refers to how a program is written in order to solve a problem.

History of programming paradigms

Initially, computers were programmed using binary code that represented control sequences fed to the computer CPU. This was difficult and error-prone. Programs written in binary are said to be written in machine code, which is a very low-level programming paradigm. To make programming easier, assembly languages were developed. These replaced machine code functions with mnemonics and memory addresses with labels. Assembly language programming is also a low-level paradigm, although it is a second-generation paradigm. The listing below shows an assembly language program that adds together two numbers and stores the result.

Label   Function   Address   Comments
        LDA        X         Load the accumulator with the value of X
        ADD        Y         Add the value of Y to the accumulator
        STA        Z         Store the result in Z
        STOP                 Stop the program

Input
X: 20   Value of X = 20
Y: 35   Value of Y = 35
Z:      Location for result

Although assembly language is an improvement over machine code, it is still prone to errors, and code is difficult to debug, correct and maintain. The next advance was the development of procedural languages. These are third-generation languages and are also known as high-level languages. These languages are problem-oriented, as they use terms appropriate to the type of problem being solved. For example, COBOL (Common Business Oriented Language) uses the language of business.

FORTRAN (FORmula TRANslation) and ALGOL (ALGOrithmic Language) were developed mainly for scientific and engineering problems, although one of the ideas behind the development of ALGOL was that it should be an appropriate language for defining algorithms. BASIC (Beginners' All-purpose Symbolic Instruction Code) was developed to enable more people to write programs. All these languages follow the procedural paradigm: they describe, step by step, exactly the procedure that should be followed to solve a problem.

The problem with procedural languages is that it can be difficult to reuse code and to modify solutions when better methods of solution are developed. In order to address these problems, object-oriented languages (like Eiffel, Smalltalk and Java) were developed. In these languages data, and the methods of manipulating the data, are kept as a single unit called an object. The only way that a user can access the data is via the object's methods. This means that, once an object is fully working, it cannot be corrupted by the user. It also means that the internal workings of an object may be changed without affecting any code that uses the object. The object-oriented programming paradigm has significantly eased the development of complex software, due to the decomposition of problems into modular entities. It allows the specification of class hierarchies with subtype polymorphism (virtual methods), which has been a major enhancement for many different types of applications.

A further advance was made when declarative programming paradigms were developed. In these languages the computer is told what the problem is, not how to solve the problem. Given a database the computer searches for a solution. The computer is not given a procedure to follow as in the languages discussed so far.
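As a small illustration of the declarative style, the following Haskell sketch (the names parents and grandparentsOf are invented for this example, not taken from the text) states which answers are wanted from a tiny database of facts, rather than spelling out how to search for them.

-- A tiny "database" of facts: (parent, child) pairs.
parents :: [(String, String)]
parents = [ ("Alice", "Bob")
          , ("Bob",   "Carol")
          , ("Alice", "Dave")
          ]

-- Declaratively state what a grandparent is: x is a grandparent of z
-- whenever x is a parent of some y and that same y is a parent of z.
grandparentsOf :: String -> [String]
grandparentsOf z = [ x | (x, y) <- parents, (y', z') <- parents, y == y', z == z' ]

main :: IO ()
main = print (grandparentsOf "Carol")   -- prints ["Alice"]

The list comprehension reads as a specification of the result; how the candidate pairs are enumerated is left to the language implementation.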

Another programming paradigm is functional programming. Programs written using this paradigm use functions, which may call other functions (including themselves).

These functions have inputs and outputs. Variables, as used in procedural languages, are not used in functional languages. Functional languages make a great deal of use of recursion.
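For example, a computation that would use a loop and a mutable variable in a procedural language is expressed with recursion in a functional language. The following Haskell sketch (factorial is defined here purely for illustration) computes a factorial by calling itself.

-- Compute n! by recursion; no mutable variables are used.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

main :: IO ()
main = print (factorial 5)   -- prints 120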

The programming paradigms which are most widely used and implemented by various programming languages are:

Imperative programming
Object-oriented programming (OO)
Functional programming (FP)
Generic programming (GP)
Meta-programming (MP)

IMPERATIVE PROGRAMMING

In computer science, imperative programming is a programming paradigm that describes computation in terms of statements that change a program state. In much the same way as the imperative mood in natural languages expresses commands to take action, imperative programs define sequences of commands for the computer to perform. The term is used in opposition to declarative programming, which expresses what needs to be done, without prescribing how to do it in terms of sequences of actions to be taken. Functional and logic programming are examples of a more declarative approach.

Procedural programming is imperative programming in which the statements are structured into procedures (also known as subroutines or functions); the terms are often used as synonyms, but the use of procedures has a dramatic impact on what imperative programs look like and how they are constructed. Heavily procedural programming, in which state changes are localized to procedures or restricted to explicit arguments and returns from procedures, is known as structured programming. For example, to find the area of a rectangle the steps are:

Read the length
Read the breadth
Multiply the length by the breadth
Output the result

In C++ this can be coded as

cout << "Enter the length: "; cin >> Length; cout << "Enter the breadth: "; cin >> Breadth; Area = Length * Breadth; cout << "The area is " << Area << endl; Here each line of code is executed one after the other in sequence. Most procedural languages have two methods of selection. These are the IF THEN ELSE statement and the SWITCH or CASE statement. For example, in C++, we have IF (Number > 0) cout << "The number is positive."; ELSE { IF (Number = = 0) cout << "The number is zero."; ELSE cout << "The number is negative."; } In C++ multiple selections can be programmed using the SWITCH statement. For example, suppose a user enters a single letter and the output depends on that letter, a typical piece of code could be switch (UserChoice) { case 'A': cout << "A is for Apple."; break; case 'B': cout << "B is for Banana."; break; case 'C': cout << "C is for Cat."; break; default: cout << "I don't recognise that letter.";

} Repetition (or iteration) is another standard construct. Most procedural languages have many forms of this construct such as FOR NEXT REPEAT UNTIL WHILE DO A typical use of a loop is to add a series of numbers. The following pieces of C++ code add the first ten positive integers. //Using a FOR loop Sum = 0; FOR (int i = 1; i <= 10; i++) { Sum = Sum + i; } cout << "The sum is " << Sum; //Using a WHILE loop Sum = 0; i = 1; while (i <= 10) { Sum = Sum + i; i++; } cout << "The sum is " << Sum; The point to note with these procedural languages is that the programmer has to specify exactly what the computer is to do. Procedural languages are used to solve a wide variety of problems. Some of these languages are more robust than others. This means that the compiler will not let the programmer write statements that may lead to problems in certain circumstances.

Object-Oriented Programming

Object-oriented programming (OOP) is a programming paradigm that uses "objects" and their interactions to design applications and computer programs. Programming techniques may include features such as encapsulation, modularity, polymorphism, and inheritance. It was not commonly used in mainstream software application development until the early 1990s. Many modern programming languages now support OOP.

The object-oriented programming paradigm [98] introduced mechanisms for obtaining modular software design and reusability, in contrast to the universal accessibility of implementations in imperative programming. The central idea of the object-oriented paradigm is the introduction of classes, which capture the basic properties of the concepts to be implemented. Based on this description of properties, an object is created, which can be briefly described as a self-governing unit of information that actively communicates with other objects. This is the main difference compared to the passive access used in imperative programming languages.

Object-oriented programming can trace its roots to the 1960s. As hardware and software became increasingly complex, quality was often compromised. Researchers studied ways in which software quality could be maintained. Object-oriented programming was deployed in part as an attempt to address this problem by strongly emphasizing discrete units of programming logic and re-usability in software. The object-oriented methodology focuses on data rather than processes, with programs composed of self-sufficient modules (objects), each containing all the information needed to manipulate its own data structure.

The Simula programming language was the first to introduce the concepts underlying object-oriented programming (objects, classes, subclasses, virtual methods, coroutines, garbage collection, and discrete event simulation) as a superset of Algol. Simula was used for physical modeling, such as models to study and improve the movement of ships and their content through cargo ports. Smalltalk was the first programming language to be called "object-oriented".

Fundamental concepts

The fundamental concepts of object-oriented programming are the following:

Class

Defines the abstract characteristics of a thing (object), including the thing's characteristics (its attributes) and the thing's behaviors (the things it can do: operations or features). One might say that a class is a blueprint or factory that describes the nature of something. For example, the class Dog would consist of traits shared by all dogs, such as breed and fur color (characteristics), and the ability to bark and sit (behaviors). Classes provide modularity and structure in an object-oriented computer program. A class should typically be recognizable to a non-programmer familiar with the problem domain, meaning that the characteristics of the class should make sense in context. Also, the code for a class should be relatively self-contained (generally using encapsulation). Collectively, the properties and methods defined by a class are called members.

Object

A pattern (exemplar) of a class. The class of Dog defines all possible dogs by listing the characteristics and behaviors they can have; the object Lassie is one particular dog, with particular versions of the characteristics. A Dog has fur; Lassie has brown-and-white fur.

Instance

One can have an instance of a class or a particular object. The instance is the actual object created at runtime. In programmer jargon, the Lassie object is an instance of the Dog class. The set of values of the attributes of a particular object is called its state. The object consists of state and the behavior that is defined in the object's class.

Inheritance

Subclasses are more specialized versions of a class, which inherit attributes and behaviors from their parent classes, and can introduce their own. For example, the class Dog might have sub-classes called Collie, Chihuahua, and GoldenRetriever. In this case, Lassie would be an instance of the Collie subclass. Suppose the Dog class defines a method called bark() and a property called furColor. Each of its sub-classes (Collie, Chihuahua, and GoldenRetriever) will inherit these members, meaning that the programmer only needs to write the code for them once. Each subclass can alter its inherited traits. For example, the Collie class might specify that the default furColor for a collie is brown-and-white. The Chihuahua subclass might specify that the bark() method produces a high pitch by default. Subclasses can also add new members. The Chihuahua subclass could add a method called tremble(). So an individual chihuahua instance would use a high-pitched bark() from the Chihuahua subclass, which in turn inherited the usual bark() from Dog. The chihuahua object would also have the tremble() method, but Lassie would not, because she is a Collie, not a Chihuahua. In fact, inheritance is an is-a relationship: Lassie is a Collie. A Collie is a Dog. Thus, Lassie inherits the methods of both Collies and Dogs.

Multiple inheritance is inheritance from more than one ancestor class, neither of these ancestors being an ancestor of the other. For example, independent classes could define Dogs and Cats, and a Chimera object could be created from these two which inherits all the (multiple) behavior of cats and dogs. This is not always supported, as it can be hard both to implement and to use well.

Abstraction

Abstraction is simplifying complex reality by modelling classes appropriate to the problem, and working at the most appropriate level of inheritance for a given aspect of the problem. For example, Lassie the Dog may be treated as a Dog much of the time, a Collie when necessary to access Collie-specific attributes or behaviors, and as an Animal (perhaps the parent class of Dog) when counting Timmy's pets.

Abstraction is also achieved through composition. For example, a class Car would be made up of an Engine, Gearbox, Steering objects, and many more components. To build the Car class, one does not need to know how the different components work internally, but only how to interface with them, i.e., send messages to them, receive messages from them, and perhaps make the different objects composing the class interact with each other.

Encapsulation

Encapsulation conceals the functional details of a class from objects that send messages to it. For example, the Dog class has a bark() method. The code for the bark() method defines exactly how a bark happens (e.g., by inhale() and then exhale(), at a particular pitch and volume). Timmy, Lassie's friend, however, does not need to know exactly how she barks. Encapsulation is achieved by specifying which classes may use the members of an object. The result is that each object exposes to any class a certain interface: those members accessible to that class. The reason for encapsulation is to prevent clients of an interface from depending on those parts of the implementation that are likely to change in future, thereby allowing those changes to be made more easily, that is, without changes to clients. For example, an interface can ensure that puppies can only be added to an object of the class Dog by code in that class. Members are often specified as public, protected or private, determining whether they are available to all classes, sub-classes or only the defining class. Some languages go further: Java uses the default access modifier to restrict access also to classes in the same package, C# and VB.NET reserve some members to classes in the same assembly using the keywords internal (C#) or Friend (VB.NET), and Eiffel and C++ allow one to specify which classes may access any member.

Polymorphism

Polymorphism allows the programmer to treat derived class members just like their parent class' members. More precisely, polymorphism in object-oriented programming is the ability of objects belonging to different data types to respond to calls of methods of the same name, each one according to an appropriate type-specific behavior. One method, or an operator such as +, -, or *, can be abstractly applied in many different situations. If a Dog is commanded to speak(), this may elicit a bark(). However, if a Pig is commanded to speak(), this may elicit an oink(). They both inherit speak() from Animal, but their derived class methods override the methods of the parent class; this is Overriding Polymorphism.

Overloading Polymorphism is the use of one method signature, or one operator such as +, to perform several different functions depending on the implementation. The + operator, for example, may be used to perform integer addition, float addition, list concatenation, or string concatenation. Any two subclasses of Number, such as Integer and Double, are expected to add together properly in an OOP language. The language must therefore overload the + operator to work this way. This helps improve code readability. How this is implemented varies from language to language, but most OOP languages support at least some level of overloading polymorphism.

Many OOP languages also support Parametric Polymorphism, where code is written without mention of any specific type and thus can be used transparently with any number of new types. Pointers are an example of a simple polymorphic routine that can be used with many different types of objects.[3]

Decoupling

Decoupling allows for the separation of object interactions from classes and inheritance into distinct layers of abstraction. A common use of decoupling is to polymorphically decouple the encapsulation, which is the practice of using reusable code to prevent discrete code modules from interacting with each other.

The following is an example, using Java, of a class that specifies a rectangle and the methods that can be used to access and manipulate the data. A more detailed description is given in Section 4.5.6.

class Shapes {  //This illustrates the basic ideas of OOP

    // Declare three object variables of type Rectangle
    Rectangle small, medium, large;

    // Create a constructor where the initial work is done
    Shapes ( ) {
        // Create the three rectangles
        small = new Rectangle(2, 5);
        medium = new Rectangle(10, 25);
        large = new Rectangle(50, 100);

        //Print out a header
        System.out.println("The areas of the rectangles are:\n");

        //Print the details of the rectangles
        small.write( );
        medium.write( );
        large.write( );
    }//end of constructor Shapes.

    //All programs have to have a main method
    public static void main(String [ ] args) {
        //Start the program from its constructor
        new Shapes ( );
    }//end of main method.
}//end of class Shapes.

class Rectangle {
    //Declare the variables related to a rectangle
    int length;
    int width;
    int area;

    //Create a constructor that copies the initial values into the object's variables
    Rectangle (int w, int l) {
        width = w;
        length = l;
        //Calculate the area
        area = width * length;
    }//end of constructor Rectangle

    //Create a method to output the details of the rectangle
    void write ( ) {
        System.out.println("The area of a rectangle " + width + " by " + length + " is " + area);
    }//end of write method.
}//end of class Rectangle.

This example contains two classes. The first is called Shapes and is the main part of the program. It is from here that the program will run. The second class is called Rectangle and it is a template for the description of a rectangle. The class Shapes declares three object variables of type Rectangle and has a constructor, also called Shapes. The declaration does not assign any values to these objects; in fact, Java simply says that, at this stage, they have null values. Later, the new statement creates actual rectangles: small is given a width of 2 and a length of 5, medium is given a width of 10 and a length of 25, and large is given a width of 50 and a length of 100. When a new object is created from a class, the class constructor, which has the same name as the class, is called. The class Rectangle has a constructor that assigns values to width and length and then calculates the area of the rectangle. The class Rectangle also has a method called write( ), which is used to output the details of a rectangle. In the class Shapes, the constructor prints a heading and then the details of the rectangles; the latter is achieved by calling the write method. Remember, small, medium and large are objects of the Rectangle class. This means that, for example, small.write( ) will cause Java to look in the class called Rectangle for a write method and then use it.

Functional Programming

The lambda calculus provides the model for functional programming. Modern functional languages can be viewed as embellishments to the lambda calculus. [1] Lambda calculus provides a theoretical framework for describing functions and their evaluation. Though it is a mathematical abstraction rather than a programming language, it forms the basis of almost all functional programming languages today.
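As a small illustration of that correspondence (a sketch added here, not taken from the text), a lambda-calculus abstraction such as λx. x + 1 is written almost unchanged in Haskell:

-- The Haskell lambda  \x -> x + 1  is a direct spelling of λx. x + 1;
-- applying it to 4 reduces to 5, just as (λx. x + 1) 4 does in the calculus.
increment :: Integer -> Integer
increment = \x -> x + 1

main :: IO ()
main = print (increment 4)   -- prints 5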

A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages are often hybrids of several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[13]
Higher-order functions

Functions are higher-order when they can take other functions as arguments, and return them as results. Higher-order functions are closely related to first-class functions, in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term that describes programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
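For instance, the standard Haskell function map is higher-order: it takes another function as an argument and applies it to every element of a list. The short sketch below (squares is an illustrative name) passes an anonymous squaring function to map.

-- map takes a function (here, a lambda that squares its argument) as an argument.
squares :: [Integer]
squares = map (\x -> x * x) [1, 2, 3, 4]

main :: IO ()
main = print squares   -- prints [1,4,9,16]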

Higher-order functions enable currying, a technique in which a function is applied to its arguments one at a time, with each application returning a new function that accepts the next argument.
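In Haskell, functions are curried by default, so partial application follows directly. In the sketch below (add and addFive are invented names for illustration), applying add to a single argument returns a new function that still expects the second one.

-- add takes its arguments one at a time: applying it to 5 alone
-- returns a new function that still expects the second argument.
add :: Integer -> Integer -> Integer
add x y = x + y

addFive :: Integer -> Integer
addFive = add 5          -- partial application: the first argument is fixed

main :: IO ()
main = print (addFive 10)   -- prints 15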
Pure functions

Purely functional functions (or expressions) have no memory or I/O side effects, unless we count the computation of the result in itself as a side-effect. This means that pure functions have several useful properties, many of which can be used to optimize the code:

If the result of a pure expression is not used, it can be removed without affecting other expressions.

If a pure function is called with parameters that cause no side-effects, the result is constant with respect to that parameter list (sometimes called referential transparency), i.e. if the pure function is again called with the same parameters, the same result will be returned (this can enable caching optimisations).

If there is no data dependency between two pure expressions, then their order can be reversed, or they can be performed in parallel, and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
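A brief Haskell sketch of these properties (square and total are illustrative names, not from the text): because square is pure, the expression square 3 can be replaced by its value anywhere it appears, and the two calls below could be cached, reordered or evaluated in parallel without changing the result.

-- A pure function: its result depends only on its argument.
square :: Integer -> Integer
square x = x * x

-- Referential transparency: square 3 can always be replaced by 9,
-- and the two sub-expressions below may be evaluated in either order.
total :: Integer
total = square 3 + square 4

main :: IO ()
main = print total   -- prints 25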

Recursion

Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, allowing an operation to be performed over and over. Recursion may require maintaining a stack, but tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. The Scheme programming language standard requires implementations to recognize and optimize tail recursion.
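As a sketch of the difference (the definitions below are illustrative, not from the text), both Haskell functions compute the same sum, but sumTail carries an accumulator so that the recursive call is in tail position and can be treated by the compiler much like an imperative loop.

-- Plain recursion: the addition happens after the recursive call returns,
-- so a stack frame is needed for each step.
sumPlain :: Integer -> Integer
sumPlain 0 = 0
sumPlain n = n + sumPlain (n - 1)

-- Tail recursion: the recursive call is the last thing done, so the
-- same stack frame can be reused, much like an imperative loop.
sumTail :: Integer -> Integer
sumTail n = go n 0
  where
    go 0 acc = acc
    go k acc = go (k - 1) (acc + k)

main :: IO ()
main = print (sumPlain 10, sumTail 10)   -- prints (55,55)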
Strict versus non-strict evaluation

Functional languages can be categorized by whether they use strict or non-strict evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated.

In brief, strict evaluation always fully evaluates function arguments before invoking the function. Non-strict evaluation is free to do otherwise. To illustrate, consider the following two functions f and g:

f(x) := x^2 + x + 1
g(x, y) := x + y

Under strict evaluation, we would have to evaluate function arguments first, for example:

f(g(1, 4))
= f(1 + 4)
= f(5)
= 5^2 + 5 + 1
= 31
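Haskell is non-strict by default, but strict application can be requested explicitly. The sketch below (with f and g defined to mirror the functions above) uses the standard ($!) operator, which evaluates the argument to 5 before f is applied.

f :: Integer -> Integer
f x = x ^ 2 + x + 1

g :: Integer -> Integer -> Integer
g x y = x + y

main :: IO ()
main = print (f $! g 1 4)   -- the argument g 1 4 is reduced to 5 before the call; prints 31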

By contrast, non-strict evaluation need not fully evaluate the arguments; in particular it may send the arguments unevaluated to the function, perhaps evaluating them later. For example, one non-strict strategy (call-by-name) might work as follows:

f(g(1, 4))
= g(1, 4)^2 + g(1, 4) + 1
= (1 + 4)^2 + (1 + 4) + 1
= 5^2 + 5 + 1
= 31

A key property of strict evaluation is that when an argument expression fails to terminate, the whole expression fails to terminate. With non-strict evaluation, this need not be the case, since argument expressions need not be evaluated at all.

Advantages of strict evaluation

Parameters are usually passed around as (simple) atomic units, rather than as (rich) expressions. (For example, the integer 5 can be passed in a register, whereas the expression 1 + 4 will require several memory locations.) This has a direct implementation on standard hardware.

The order of evaluation is quite clear to the programmer: every argument must be evaluated before the function body is invoked.

Advantages of non-strict evaluation

Lambda calculus provides a stronger theoretical foundation for languages that employ non-strict evaluation.[1] A non-strict evaluator may recognize that a sub-expression does not need to be evaluated and skip it entirely, as in the sketch below.
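A minimal Haskell sketch of this point (first and loopForever are invented names for the example): because the evaluator is non-strict, an argument that is never used is never evaluated, even though evaluating it would not terminate.

-- Returns its first argument and ignores the second.
first :: Integer -> Integer -> Integer
first x _ = x

-- An expression whose evaluation never terminates.
loopForever :: Integer
loopForever = loopForever

main :: IO ()
main = print (first 42 loopForever)   -- prints 42; loopForever is never evaluated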

Lazy evaluation

The need for a more efficient form of non-strict evaluation led to the development of lazy evaluation, a type of non-strict evaluation where the initial evaluation of an argument is shared throughout the evaluation sequence. Consequently an argument (such as g(1, 4) in the above example) is never evaluated more than once. Under lazy evaluation, expressions are sent to subordinate functions as references to expression trees whose value has not yet been computed. When any such expression tree must be expanded, the expression tree "remembers" its result, and thus avoids recomputing the same expression a second time. In the initial example, this proceeds as follows:

f(g(1, 4))
= g(1, 4)^2 + g(1, 4) + 1

It is then necessary to evaluate g(1, 4). This can be computed once, yielding:

g(1, 4) = 1 + 4 = 5

Then, since both references to g(1, 4) are references to the same (pure) expression, both "know" that their value is 5. This means that their value is computed only once, even though they are passed symbolically to the function f. The evaluation then completes:

= 5^2 + 5 + 1
= 25 + 5 + 1
= 31

Lazy evaluation tends to be used by default in pure functional languages such as Miranda, Clean and Haskell.
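A short Haskell sketch of laziness in practice (naturals and expensive are illustrative definitions, not from the text): an infinite list is safe to define because only the elements actually demanded are computed, and a shared pure value is evaluated at most once however many times it is used.

-- An infinite list of natural numbers; laziness means it is never built in full.
naturals :: [Integer]
naturals = [0 ..]

-- A shared (pure) value: even if used twice, it is evaluated at most once.
expensive :: Integer
expensive = sum [1 .. 1000000]

main :: IO ()
main = do
  print (take 5 naturals)          -- prints [0,1,2,3,4]; only 5 elements are computed
  print (expensive + expensive)    -- expensive is computed once and its result shared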
