
Future Internet Architectures

(Challenges and Solutions)


Farhan Masood Butt
CSE Dept, SEECS, NUST, Islamabad, Pakistan
farhanmb@gmail.com

Muhammed Umar Zafar
CSE Dept, SEECS, NUST, Islamabad, Pakistan
em.u.zed@gmail.com

Abstract: The Internet has evolved tremendously over the last few decades and has grown to a phenomenal scale. Initially the major requirement was the transfer of files, but over time the requirements have multiplied. The Internet is now used as an information resource carrying many kinds of content, such as streaming video and audio, text, and remote device control. The present architecture has absorbed many new protocols along the way as requirements changed, but with new applications now emerging, its capacity to absorb change is reaching its limits. IP address shortage, security, mobility, and routing problems are a few of the issues that need to be addressed. Developers have realized that the Internet architecture needs to be redesigned to resolve these outstanding issues: the original design has undergone a great deal of patchwork to meet new challenges, and many designers argue that, instead of more patchwork, things should now be redesigned properly. Research on this topic is under way in many organizations, and approaches such as clean slate, SILO, and the European initiatives are discussed in this paper. The paper gives a background of the current architecture, its shortfalls, and the various solutions that different organizations are working on.

I.

INTRODUCTION

The basic design of the current Internet architecture was the result of an effort by the Defense Advanced Research Projects Agency (DARPA) back in the 1970s [1]. Since then it has gone through many major changes, and it has served mankind by socially turning the world into a global village. It has also been a source of inspiration for research in the field of communication, as its layers have given researchers new areas to work on. Although the current architecture has served very well, there are many issues that need to be addressed, among them energy efficiency, security, mobility, QoS, and support for real-time services. Many papers have made suggestions about what the architecture should look like. This paper highlights the different approaches that researchers around the world are currently working on. Most of this research concerns designing the architecture from scratch, keeping in view present and future requirements.

II. CURRENT ARCHITECTURE

A. Present Approach

Understanding the current Internet architecture is necessary to apprehend why it will not meet the requirements of the future Internet. The main aims behind the present architecture are [2]:

a. Interconnection of existing networks
b. Survivability
c. Support for a wide variety of services
d. Accommodation of heterogeneous physical networks
e. Distributed management
f. Cost effectiveness
g. Minimal effort required from the host side to connect
h. Fair resource allocation

Figure 1 [2]: the five-layer protocol stack.

The early Internet designers achieved the above targets with the help of well-defined design principles. These principles are mainly as under [2]:

a. Layering
b. Packet switching
c. Internetworking at the IP layer
d. The end-to-end argument

B. Layering

The layering approach is often referred to by the familiar term "IP stack". The Internet architecture is divided into five layers: application, transport, network, link, and physical. Each layer in the stack offers services to the next higher layer, as seen in Figure 1. This approach reduces intricacy and simplifies the design by isolating areas according to the tasks assigned to each layer. The physical layer deals with coding the data appropriately so that it can be transferred over a physical medium such as wire or optical fiber. The link layer handles communication between devices that are close to each other, such as a direct link or a network made up of switches. The network (IP) layer assigns logical addresses to hosts and takes routing decisions in internetworking environments; this layer facilitates host-to-host communication. The transport layer handles application-to-application communication; the most commonly used protocols at this layer are TCP and UDP. TCP is a reliable protocol that also addresses flow and congestion control to some extent. UDP offers no reliability: it acts as a simple multiplexer and follows the fire-and-forget principle. The application layer uses well-known protocols such as HTTP and FTP to support network applications. The initial aims a, c, and d are adequately handled by this layering approach, as different networks can be connected over the common network layer, while the application layer handles the various types of services the Internet can offer.

C. Packet Switching

The Internet designers chose packet switching because its store-and-forward paradigm splits data into smaller packets, which suits the bursty characteristics of data transmission. Each packet is addressed and then transported through the Internet as an individual message, and every packet can traverse a different path.
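The splitting and reassembly described above can be sketched in a few lines. This is a toy illustration only: the header fields and sizes are invented for clarity and are not part of any real protocol.

```python
MTU = 4  # tiny payload size so the split is visible

def packetize(message, src, dst):
    """Split a message into individually addressed packets with sequence numbers."""
    chunks = [message[i:i + MTU] for i in range(0, len(message), MTU)]
    return [{"src": src, "dst": dst, "seq": n, "data": c}
            for n, c in enumerate(chunks)]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the original message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("HELLO INTERNET", "10.0.0.1", "10.0.0.2")
# Packets may arrive out of order after traversing different paths:
arrived = list(reversed(pkts))
assert reassemble(arrived) == "HELLO INTERNET"
```

Because each packet carries its own addressing and sequencing information, the network can route them independently and the receiver can still rebuild the message.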
One message divided into packets that travel over different paths contributes to efficient utilization of resources and also satisfies aim f. Another feature facilitated by packet switching is stateless routing at the network layer, as it does not require per-connection state.

D. Internetworking at the IP Layer

The current Internet uses the IP addressing scheme at the network layer to take routing decisions. These decisions are taken at every router along the way using very mature routing algorithms. The Internet is a connection of several autonomous systems (ASs). Within an AS, routing is done via Interior Gateway Protocols (IGPs) such as OSPF and IS-IS; routing between two ASs is done with the Border Gateway Protocol (BGP). All routers use both the IGP and BGP to educate themselves about the network. This information is very useful for making routing decisions, as routing tables are updated and optimized depending on the results obtained when these protocols converge. All the paths in a routing table are calculated in a distributed manner, courtesy of the protocols in use, thus achieving aim e. The routing mechanism also ensures survivability of a transmission, as alternate routes are pre-calculated, thus achieving aim b.
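The per-hop decision described above can be sketched as a longest-prefix-match lookup over a forwarding table, using Python's standard `ipaddress` module. The prefixes and next-hop names here are invented for illustration.

```python
import ipaddress

# A hypothetical per-router forwarding table: prefix -> next hop.
forwarding_table = {
    "10.0.0.0/8":  "router-A",
    "10.1.0.0/16": "router-B",
    "0.0.0.0/0":   "default-gw",
}

def next_hop(dst, table):
    """Pick the most specific (longest) matching prefix for the destination."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in table
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return table[str(best)]

assert next_hop("10.1.2.3", forwarding_table) == "router-B"   # most specific wins
assert next_hop("10.2.0.1", forwarding_table) == "router-A"
assert next_hop("8.8.8.8",  forwarding_table) == "default-gw"
```

In the real Internet, the entries in such a table are what the IGP and BGP converge on in a distributed manner; each router repeats this lookup independently.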

E. End-to-End Argument

The end-to-end argument refers to keeping the intelligence at the end nodes. Instating checks at the lower layers is of little use, as it increases cost, while doing so at the application level is cheaper and has more or less the same effect. The end-to-end argument, together with packet switching, aims at survivability and cost effectiveness (aims b and f). Aims g and h are achieved through patchwork such as DHCP and SNMP respectively. The current architecture is said to apply the best-effort approach: none of the aims is ever fully achieved, and any effort that brings the network design closer to full achievement is always welcome.

III. PROBLEMS IN THE CURRENT ARCHITECTURE

The current Internet architecture has many issues that need to be addressed in future designs. As evident from the brief above, the core of the current architecture is the IP layer. The Internet has been around for almost 40 years, and while there have been many changes at all the layers, the IP layer has not changed much; only the exhaustion of IPv4 addresses stands out as its biggest problem. The major concerns with today's architecture are as under [3]:

a. Security
b. Mobility
c. Network management
d. QoS
e. Control on the receiver side
f. Inter-layer communication
g. Shortage of IPv4 addresses

A. Security

The current Internet architecture lacks the security levels users require to be sure that they will not be hacked or that a virus or worm will not wipe their hard disks. The main reason seems to be that the network is a group of cooperating nodes, and it is very difficult to achieve security in such a setting. Although a lot of work has been done on IPsec and DNSSEC, there are still loopholes in the system that can be exploited by malicious users.

B. Mobility

The popularity of portable computing machines such as laptops, PDAs, and mobile phones is growing day by day, which is the main reason for the increasing demand for wireless Internet devices. The wireless Internet is still not mature, and a lot of work remains to be done. Among the major issues: whenever there is a change in network location, the system does not behave in a robust manner; reconnection is slow; and sessions should resume from where the disconnection interrupted them, but that is not the case. Another problem is that a mobile device cannot adapt to its network conditions: mobile devices do not have the power or authority to adjust to a changing environment to obtain optimal results. The main problem faced by the mobile Internet is the IP addressing convention in use. The hierarchical naming convention employed in the Internet binds an address to a mobile device. All work so far on this problem suggests either changing this IP convention or assigning a new IP address to a mobile device when it is on the move and changing networks.

C. Network Management

The Internet has inherently worked in a very distributed manner, which has made it very difficult to manage. A packet traveling from one host to another is stripped and modified as decided by the routers it passes through. The sender neither knows about nor has any control over the packet once it leaves the sending machine. Resources can only be managed once it is known where they are being consumed; likewise, network resources cannot be managed until it is known which application is consuming how much of the network.

D. QoS

The modern Internet user demands QoS: for example, someone paying for a 1 Mbps connection expects that speed at all times. The current architecture can never give such guarantees, as it has a stateless design and is, as it is always called, best effort.

E. Control on the Receiver Side

Considering the path of a packet on the Internet, initially the source has control over it, and then the network takes over while the packet is traveling on it, but the receiver never has any control over the packet. Receivers should be given some control over the traffic flowing to them or passing through them, so that they have the freedom to exercise their own policies.

F. Inter-Layer Communication

Application- and transport-layer protocols do not learn of any changes sensed at the lower layers of the network, so they can neither adapt to a change and exploit it nor take a conservative approach and avoid packet loss. This facility is absent from the current Internet architecture.

G. Shortage of IPv4 Addresses

This is probably the most talked-about and least acted-upon problem of the Internet. Approximately 4 billion IP addresses were thought to be more than enough by the designers of the initial Internet, but today a severe scarcity of IPv4 addresses has been reached. Thanks to patchworks such as NAT, private addressing, and classless inter-domain routing, the problem has been delayed. IPv6 has been built and runs on the current architecture, but the transition is taking a long time because nobody wants to make such a huge change without getting much benefit in return.

IV. THE FUTURE DESIGNS

A major effort is being carried out throughout the world to design a new architecture. There are two fundamental ways in which a system can be changed [4]:

a. Incremental
b. Clean slate

In the incremental approach the system is shifted slowly, with a few changes at a time, to accommodate the new requirements of the users. The Internet has evolved with this approach so far, and a lot of patchwork has been done to get the desired results. Owing to the huge success of, and investment in, the present architecture, nobody is now willing or able to experiment with it further. The clean-slate approach, on the other hand, means starting from scratch: the old design principles and networking constraints are revisited, keeping in mind current and future requirements. This clean-slate approach to the new Internet architecture is what many universities and researchers are working on. The credit for the clean-slate approach goes to the Americans, where the idea was conceived. In 2005 the National Science Foundation (NSF) announced the Global Environment for Network Innovations (GENI) initiative. GENI is a platform on which new ideas for networking can be implemented and tested. As there were no such platforms for testing new ideas on a global scale, GENI has at least provided a benchmark for researchers to build on and test their innovations. Following the development of GENI, NSF also launched the Future Internet Design (FIND) program in 2006 to trigger new architectural designs to be tested on GENI. FIND has since been closed, and future architectures are now included in the Network Science and Engineering (NetSE) program. The European Union included its future Internet research in the Seventh Framework Programme (FP7), which began in 2008. Its research is primarily focused on the current architecture, and most of the work concerns services and applications; however, various programs taking a clean-slate approach are also running, the main ones being 4WARD, the Publish-Subscribe Internet Routing Paradigm (PSIRP), and Trilogy. The Japanese have called their initiative AKARI, which means "a small light" in Japanese, and their motto is "a small light in the dark pointing to the future". It is a government-funded program under the National Institute of Information and Communications Technology (NICT) and was launched in 2006. The Japanese researchers also call this the New

Generation Network (NWGN) architecture, or the unconstrained design. The following paragraphs briefly describe the new principles being followed and how these different approaches by researchers around the world attempt to address the various problems faced by the current Internet.

A. New Design Principles

As seen in the paragraphs above, most research around the world is taking the clean-slate approach. This means there is a consensus across the networking world that new principles must be followed and that the constraints of the current design must be set aside. Two FIND projects, the Recursive Network Architecture (RNA) and SILO, use different principles. In RNA, a single protocol is used that is adjustable to the different layers of the protocol stack. RNA suggests that instead of using different protocols for, say, the transport and network layers, a single reconfigurable metaprotocol be used whose basic operations stay the same, so that duplicate effort across layers is avoided. The main objective of RNA is to enhance cross-layer communication and fill the gaps between different layers and stacks [5]. The SILO design divides functions into smaller services and builds a service stack just in time, depending on the application's requirements. For example, FEC is a service, and Reed-Solomon coding is one way to implement that service; flow control, segmentation, and video compression could be other services. SILO defines only the services, not the way they are implemented. The main aims of SILO are smoother cross-layer communication and dynamic composition of services. SILO claims to be an unstructured architecture, but due to the predefined order for applying services it might result in fixed structures [6]. The 4WARD architecture is based on two principles. The first is that the network is divided into strata, generally understood as network layers; the difference is that these strata are highly configurable and can be changed according to the network's requirements. The second is that the networking nodes that implement the network become netlets. A netlet is further decomposed into functional blocks, where each block implements a certain function such as encryption or reliability. A network designer can use these netlets to build a network according to the requirements [7][8].

B. Virtualization

Network virtualization means building multiple logical networks on the same infrastructure; it is already used in the current IP architecture. This concept evolved from two observations: new architectures were expected to be tested on the current infrastructure, and the new architecture will consist of many virtual networks sharing the same media. Two FIND projects are worth mentioning here: Concurrent Architectures are Better than One (CABO) and an architecture for a diversified Internet.
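Before turning to the individual projects, the slicing idea they share can be sketched: one physical substrate leases parts of its link capacity to several virtual networks that coexist on the same infrastructure. All names, links, and capacities below are invented for illustration.

```python
# Physical substrate links and their capacity (hypothetical units, Gbps).
substrate = {("A", "B"): 10, ("B", "C"): 10}

def lease(substrate, vnet, links, share):
    """Carve a slice of each requested link out of the substrate
    and hand it to a virtual network provider."""
    slice_ = {}
    for link in links:
        if substrate[link] < share:
            raise ValueError(f"not enough capacity on {link}")
        substrate[link] -= share
        slice_[link] = share
    return {"vnet": vnet, "links": slice_}

v1 = lease(substrate, "video-vnet", [("A", "B"), ("B", "C")], 6)
v2 = lease(substrate, "iot-vnet",   [("A", "B")], 3)

assert substrate[("A", "B")] == 1   # 10 - 6 - 3 remaining
assert substrate[("B", "C")] == 4
```

Each virtual network can then run its own protocols end to end over its slice, which is the property both projects below build on.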

CABO defines the roles of two entities: the infrastructure provider and the virtual network provider. The infrastructure provider leases part of its link resources to the virtual network provider, which then creates a virtual network on those resources and implements the end-to-end services required by the customers. It is expected that CABO will trigger a new wave of innovation in protocol design specific to the application requirements that will run on the virtual networks [9]. The second FIND project, popularly called the diversified Internet, works on the virtualization principle of multiple networks coexisting and providing various services on the same infrastructure. The common infrastructure is known as the substrate, while the different networks that provide end-to-end delivery are known as metanetworks. Metarouters, hosted by substrate routers, implement the metanetworks, and all of them combine to form metalinks. Substrates are used to provide resources to the metanetworks. In this model the substrate layer can be compared to the IP layer of the current architecture and can be viewed as a new layer lying between the second and third layers [10].

C. Routing

Routing table size is a big upcoming problem in the current architecture. As routing tables grow at a very high rate, their size has become a serious concern for future Internet designers. One FIND initiative in this regard is the greedy routing protocol. Its highlight is that it does not require regular updates, as it chooses the next hop on the basis of proximity to the destination. This protocol also reduces the size of the routing table, as only one-hop routing decisions have to be made [11]. Another FIND proposal, "An Internet architecture for user-controlled routes", and AKARI use the same principle: in these models the routes are selected by the users.
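The greedy next-hop rule mentioned above can be sketched in a few lines: given a coordinate for every node, each hop simply forwards to the neighbour closest to the destination, so no routing tables or periodic updates are needed. The coordinates and topology here are invented for illustration.

```python
import math

coords = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (6, 1)}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_route(src, dst):
    """Forward greedily: always hand the packet to the neighbour
    nearest (in coordinate space) to the destination."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(neighbours[here], key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(here, dst):
            break  # local minimum: greedy forwarding can get stuck
        path.append(nxt)
    return path

assert greedy_route("A", "D") == ["A", "B", "C", "D"]
```

The `break` branch shows the known weakness of the idea: on some topologies greedy forwarding reaches a dead end, which is part of what such research has to address.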
Some level of QoS can be obtained from this design, as a user can reroute his traffic through less congested areas of his choice. Another advantage would be that network providers will improve quality, as there will be more competition when users select the best routes for themselves [12][4]. 4WARD is working on proposals considering multipath rather than conventional single-path routing; two main advantages of this approach could be efficient traffic engineering and a robust reaction in case of failure. The 4WARD team is working on a TCP variant that will use congestion-control monitoring on all the paths assigned to a connection [4]. Another FIND proposal suggests a swarming architecture for data transfer on the Internet. Swarming is a very popular technique for transferring large files and is commonly used in BitTorrent. The team is trying to answer whether swarming can be used as a universal protocol, whether it is scalable, and whether it is fault tolerant [13]. These are the various methods on which engineers are working to resolve routing issues in the current

Internet. It also seems that by implementing one of these schemes the QoS issues might be resolved in parallel, though this can only be achieved if the routing protocols use the network efficiently.

D. Security

The current Internet architecture lags behind in all respects as far as security is concerned. Some patchwork has already been applied, but many loopholes remain to be covered. Under FP7, a special program named the Security Research Programme is researching how to make the future Internet a trustworthy network with protection against cyber threats, while also keeping user privacy in mind. ECRYPT II is another FP7-funded program; its research concerns the basics of cryptography in networking and is divided among three main labs. The main aim of the first lab, SYMLAB, is the development of efficient, secure hash functions; SYMLAB is used to facilitate the design and study of symmetric encryption systems. The second lab, MAYA, has as its main goal enhancing the design and analysis of asymmetric cryptographic primitives and protocols; it is also used to assess the computational complexity of the encryption algorithm under the spotlight. The third lab under ECRYPT II, VAMPIRE, aims to find secure and efficient ways of implementing the algorithms [14]. Another EU effort in network security is the project named INTERSECTION, which aims to provide an integrated security framework and focuses on attacks at the points where two networks interconnect [15]. FP7 is also funding a program named SWIFT, which focuses primarily on privacy through the management of identification codes, and another program, WOMBAT, which is researching the identification of threats to a network [16][17]. AKARI and 4WARD have not announced or progressed much on the security of the future Internet architecture; however, several FIND projects can be mentioned here. "Design of secure networks", a FIND project, suggests that authorization be obtained for the communication process. In a private network this authorization is granted by a domain controller, which holds all the network policies required for authorizing any information exchange. For public networks this model suggests changes to APIs, with end-to-end security ensured by authorization from the application itself [18]. Other FIND projects focus on privacy and on identifying culprits. In the first, on private attribution, the idea is to trace the miscreant while still maintaining individual privacy. This is achieved by revealing the identity in two stages: first, the group of which the target is a member is identified with the help of group signatures; second, special authorization is required to identify the member within the group. The second project focuses on attribution and origin: in this case the host machine that has malicious

intentions is identified, and control over the type of data traversing the network is achieved [19].

E. Content-Based Networking

Since its inception, the usage of the Internet has changed a lot. It started with the exchange of files and emails, but it is now used for collecting information. Today's user wants data authenticity rather than knowledge of where the data has come from. A DARPA project on Content-Centric Networking (CCN), headed by Van Jacobson of the Palo Alto Research Center, is focused on the future content-based Internet. In the current architecture the main focus is on the various nodes that make up the network, such as servers and routers, but in the future architecture the focus will shift to content. In this concept, contents can be considered identities that can be assembled in any form by the user.
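A toy sketch of this idea: data is requested by name rather than by host address, and authenticity is checked against the content itself (here via a hash over the data), not against where it came from. The naming scheme and store layout below are invented for illustration and are not CCN's actual wire format.

```python
import hashlib

def content_name(data):
    """Name content by a digest of the content itself."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Any node (a cache, a neighbour, the origin server) may hold a copy.
store = {}
video = b"<frame1><frame2>"
store[content_name(video)] = video

def fetch(name, store):
    """Retrieve by name and verify the data actually matches the name."""
    data = store[name]
    if content_name(data) != name:
        raise ValueError("content does not match its name")
    return data

name = content_name(video)
assert fetch(name, store) == video
```

Because verification depends only on the name/data pair, it no longer matters which node supplied the copy, which is the shift from host-centric to content-centric networking.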

Fig. 2 [20]

The figure above illustrates how the content-centric Internet of the future will look: the architecture is envisaged as an overlay of multiple clouds [4][20]. The main challenges that CCN faces are illustrated in Fig. 3.

Fig. 3 [20]

F. Mobility

Mobility of devices will be one of the main requirements of the future Internet. More and more people are shifting daily from desktop computers to laptops, PDAs, and the like, and future Internet users will demand quality service on the move. Unfortunately, not much research on mobility is being carried out in the AKARI and FIND programs. The geometric stack project of FIND is one in which the position of the mobile device is passed to the network layer for routing purposes [4][21]. The 4WARD project has worked on the mobility side of the Internet; the main theme of its work has been to connect mobile devices through more than one attachment point, where these attachment points can have the same or different interfaces, e.g. optical or wired [4][22].

G. Optical Networking

Researchers have predicted that the computational power of the electronic devices used in networking will not be able to keep up with the pace at which Internet applications are growing. The solution to this problem is the use of optical devices, and research is currently being carried out on how this turnover will affect the architecture of the Internet. A FIND project called Dynamic Optical Circuit Switching (DOCS) is working on a configurable optically switched network architecture using photonic integrated circuits (PICs). This effort focuses on the Internet backbone and suggests dynamic switching for the humongous amount of data being forwarded; the main gains from this endeavor would be more bandwidth to play with, less power, cheaper technology, and scalability [23]. Another FIND project working in this area is Future Optical Network Architectures (FONA). The main emphasis of this research is to fully utilize the abundance of bandwidth available in optical networks by using intelligent, runtime-configurable systems; the main areas of research under this flag are optical flow switching (OFS) and routing algorithms that are aware of developing faults [24]. In 2009 FIND approved two more projects, GOALI and the dynamically programmable optical layer (DPOL). In GOALI, research is being done on optical flow-switched core networks: whereas the wavelength is fixed in current wavelength division multiplexing (WDM) systems, the optical flow-switched networks in GOALI have a configurable wavelength and service provisioning on a time scale of 1 s or less. DPOL concerns research on a programmable physical optical layer. To realize this idea, deep cross-layer access into the physical layer is required. The higher layers in the IP stack have very little control over the lower layers, but in this model, thanks to enhanced cross-layer communication, the capabilities of the optical layer will be exploited to the maximum [25][26]. In AKARI, research on optical networks has also progressed a lot. While the main aim of similar FIND projects is to use switched-circuit technology, the Japanese researchers in AKARI are focusing on optical packet switching and are in the process of building an all-optical router. In the NWGN model of AKARI, a lightpath is provided from end to end. The multiplexing capacity of current optical links is not sufficient to generate a large number of lightpaths, but a lot of research is going on to increase this capacity through efficient WDM techniques [27].

H. Managing the Network

Network management is a key element for successfully running and implementing a network. The designers of the future Internet know the value of network management and have therefore given it a lot of emphasis. In 4WARD the approach is called in-network management: the management stations are not outside the network as in the old architecture; instead, a separate management plane is defined for all network management tasks within the network [4]. AUTOI, one of the FP7 projects, has the following objective: "Creation of a management resource overlay with autonomic characteristics for the purposes of easy, fast and guaranteed service delivery." The researchers are implementing this by making a service-aware network that has the capability of self-knowledge and self-service. These functions enable continuous tuning of the network and a positive response when a failure occurs, and they increase the dependability of the network. Along with a separate management plane, AUTOI also creates other planes, such as knowledge and service-enabler planes [28]. Two FIND projects that have done work in the field of network management for the future Internet architecture are "Design for manageability in the next-generation Internet" and "Complexity-oblivious network management". The first uses small building blocks, such as protocols for data sharing, ever-present instrumentation, and event detection mechanisms, to build a separate management plane. The second is based on the idea that the main flaw in network management in the current architecture is that management is part of the data plane; it therefore separates the management and data planes, thus reducing complexity [29][30][4]. Presently, the work being done in this domain for future networks centers on separate management planes that are self-managed and guided by policies defined by the network administrator.

CONCLUSION

The designers of the future Internet think that the way forward toward a new architecture is the clean-slate approach, in which all the known problems of the current architecture will be addressed in the new structure. Three main setups around the world are pursuing this aim of designing a future Internet: FIND, FP7, and AKARI. A lot of research is going on in many fields of the future architecture, but things are still at a very nascent stage and a lot of work remains to be done. Most of the projects are in the research phase, and it seems it will take some time before we get a ride on the future Internet architecture.

REFERENCES

[1] R. Braden, D. Clark, S. Shenker, and J. Wroclawski, "Developing a Next-Generation Internet Architecture".
[2] A. Feldmann, "Internet Clean-Slate Design: What and Why?", Deutsche Telekom Laboratories / TU Berlin.
[3] R. Jain, "Internet 3.0: Ten Problems with Current Internet Architecture and Solutions for the Next Generation", Department of Computer Science and Engineering, Washington University in Saint Louis, Saint Louis, MO.
[4] J. Roberts, "The clean-slate approach to future Internet design: a survey of research initiatives".
[5] http://www.nets-find.net/Funded/Rna.php
[6] http://www.nets-find.net/Funded/Silo.php
[7] M. Johnsson et al., "Towards a new architecture framework: the nth stratum concept", MobiMedia, 2008.
[8] L. Völker et al., "A node architecture for 1000 future networks", International Workshop on the Network of the Future, Dresden, Germany, June 2009.
[9] http://www.nets-find.net/Funded/Cabo.php
[10] http://www.nets-find.net/Funded/DiversifiedInternet.php
[11] http://www.nets-find.net/Funded/Greedy.php
[12] http://www.nets-find.net/Funded/InternetArchitecture.php
[13] http://www.nets-find.net/Funded/Swarming.php
[14] http://www.ecrypt.eu.org/
[15] http://www.intersection-project.eu/
[16] http://www.ist-swift.eu/
[17] http://www.wombat-project.eu/
[18] http://www.nets-find.net/Funded/DesigningSecure.php
[19] http://www.nets-find.net/Funded/EnablingDefense.php
[20] T. Zahariadis (Synelixis), J.-D. Meunier (Thomson), N. Niebert (Ericsson), P. Daras (CERTH/ITI), D. Williams (BT), and J. Bouwen (Alcatel-Lucent), "Future Content-Centric Internet Architecture: Introduction to our position paper".
[21] http://www.nets-find.net/Funded/GeometricStack.php
[22] http://www.4ward-project.eu/
[23] http://www.nets-find.net/Funded/DynamicOptical.php
[24] http://www.nets-find.net/Funded/Future.php
[25] http://www.nets-find.net/Funded/Goali.php
[26] http://www.nets-find.net/Funded/Creating.php
[27] http://akariproject.nict.go.jp/eng/overview.htm
[28] http://istautoi.eu/autoi/
[29] http://www.nets-find.net/Funded/Manageability.php
[30] http://www.nets-find.net/Funded/TowardsComplexity.php
