
Data Networking at BBN

Craig Partridge and Steven Blumenthal


BBN has an illustrious history of contributions to data networking, and has made repeated contributions in the development of networking protocols, network operations, router design, and wireless and satellite networks. Because so much of BBN's early Arpanet work has been documented elsewhere, this history focuses on post-Arpanet contributions that led to the Internet.

Many of BBN's contributions to data networking derived from the fact that BBN built the Arpanet and then maintained and operated it for the US government for 20 years. Maintaining and operating a system is an excellent way to discover how that system could have been better. Arpanet was a continuing source of inspiration, frustration, and innovation, both as a stand-alone network and then as the core of the Internet. Indeed, so many innovations occurred at BBN in the Arpanet days that presenting them all in one article is not possible.1 Furthermore, recent books have ably surveyed much of BBN's Arpanet work.2 Accordingly, this article aims to illustrate the overall picture of BBN's data networking contributions by presenting a modest number of key research and operational themes, with a focus on contributions after the early Arpanet days.

Arpanet

Nonetheless, the story has to start with Arpanet. In late December 1968, BBN was awarded a one-year contract to create the Arpanet packet switches (called interface message processors, or IMPs) and to deliver a four-node network connecting the University of California, Los Angeles; SRI International; the University of California, Santa Barbara; and the University of Utah. A team led by Frank Heart, and including Ben Barker, Bernie Cosell, Will Crowther, Bob Kahn, Severo Ornstein, and Dave Walden, set out to build the first IMPs.

The first packet switches

Somewhat to the surprise of people outside BBN, the first IMPs were completed and delivered on time.3 There were debates within ARPA about whether the network should organize itself or be centrally managed from a controlling computer. BBN felt the network should be self-organizing. Pursuing that goal led to important characteristics of the first IMPs (and the network created from them), including:4

- features to minimize the need for on-site assistance and support for cross-network diagnosis, debugging, and new releases;5
- considerable facilities for network monitoring and measurement;
- no constraints put on the data that hosts could exchange over the network;
- initial distributed algorithms for IMP-to-IMP communications and network routing;
- much less successful initial algorithms for host-to-IMP and source-IMP-to-destination-IMP communications (the former was too limited, and the latter was simply inadequate for congestion control and multiplexing); and
- a design and implementation that used memory and machine cycles efficiently and that was reliable, in the sense that the IMPs did not crash because of coding bugs.

Although there were some missteps, the initial IMP design and implementation was robust. It provided good support for the host experiments and a powerful mechanism for releasing incremental improvements as they were needed.

Arpanet grows

After installing the first four Arpanet nodes in 1969, ARPA expanded the network to 19 nodes. To support this expansion, BBN augmented its IMP development team (the so-called IMP Guys). Two key people had joined the team by 1971: Alex McKenzie and John McQuillan, both initially involved with network operations. Using a new Arpanet node at BBN in Cambridge, Massachusetts, they monitored and updated the network of IMPs via the network itself. At first, they simply printed status reports from the West Coast IMPs on a Teletype terminal but soon started to collect and analyze this data in another machine. McQuillan wrote the first significant network monitoring software, and McKenzie began to lead the operations component of BBN's efforts.6

Beyond the tasks of operating and managing the Arpanet, the IMP Guys had to solve a number of important problems, including:

- fixing the problems with the initial design for end-to-end message reassembly to deal with congestion problems,7
- augmenting the IMP with a terminal handling (TIP) capability,8,9
- supporting satellite links between IMPs (see the Network operations section),
- developing a multiprocessor version of the IMP,10 and
- replacing the original (and first) distance vector routing algorithm with the first link-state routing algorithm.11

Furthermore, once the Arpanet became operational, there was a tremendous effort to develop the host-to-host protocols that ran over the network. ARPA funded a number of groups (mostly at sites having or anticipated to have an IMP) to study and develop protocols. BBN, as the Arpanet's operator, and also as the maintainer of the TENEX operating system (one of the major research operating systems of the time12), had a special role in developing or refining several early Arpanet protocols: Network Control Protocol (TCP's predecessor),13 Initial Connection Protocol,14 Telnet,15 File Transfer Protocol (FTP),16 and, of course, the email protocols (see R. Schantz's article, "Distributed Computing at BBN," elsewhere in this issue for details on BBN's role in developing email).

In July 1975, ARPA declared that Arpanet was an operational network and transferred management responsibility for it from ARPA to the Defense Communications Agency (DCA). BBN continued to have day-to-day operational responsibility, now under contract to DCA rather than ARPA. ARPA paid DCA a fee for each Arpanet location it sponsored; other parts of the government (for example, the US Army) also paid DCA for their locations. DCA agreed to operate Arpanet until mid-1978, after which it was to be replaced by an equivalent service provided by a military network. The anticipated military network was AUTODIN II (discussed later).

Some Arpanet spin-offs

Following the original Arpanet IMP development and deployment, BBN was involved in or influenced a number of derivative networks.

The most notable of these spin-offs was the first operational packet-switching common carrier, Telenet, which BBN founded in 1972. The business aspects of this effort have been described by Steve Levy.17 BBN sent its IMP software and developers Steve Butterfield and Chris Newport to Telenet in Washington, D.C., where Butterfield converted the software to run on a later model computer. As soon as possible, Telenet redid its packet-switching software on its own hardware. In 1979, BBN arranged the sale of Telenet to GTE and used its share of the substantial return on investment to develop its own networking business.

Perhaps more interesting than the Arpanet spin-offs was the strong desire of both researchers and corporations to develop their own, independent versions of packet switching. Some teams wanted proprietary protocols that they could control (for example, IBM's Systems Network Architecture (SNA) and Digital Equipment Corp.'s DECnet). Others simply wanted to explore alternative design choices (the Cyclades project in France went out of its way to learn about Arpanet and then do things differently).

AUTODIN II: Arpanet as the military's key network

AUTODIN (the Automatic Digital Network) was developed in the 1960s as a Defense Department (DoD)-wide message-switching system. It was operated by Western Union under a contract from DCA. In 1976, DCA announced that it would replace AUTODIN with a new packet-switching network called AUTODIN II.

In bidding for the AUTODIN II contract, the BBN team made a tremendous mistake. Rather than writing the proposal as DCA had specifically requested, BBN management insisted on writing it in a different format, one that seemed more coherent to them. Unfortunately, because BBN's proposal format did not match DCA's, it suffered during DCA's proposal review process. Consequently, BBN lost the contract to a team led by Western Union.

Over the next 18 months, Western Union and its team missed some major milestones in the development schedule, and DCA began to worry that Western Union was failing. Pressure began to build for DCA to adopt the already-working Arpanet technology. As one example, Keith Uncapher, director of the University of Southern California's Information Sciences Institute (USC/ISI), advised ARPA and DCA to accept the Arpanet technology. In time, DCA opened a new bid for the contract, and BBN competed against Western Union for it.

The BBN proposal, a 4-inch-thick document, addressed all the issues identified by DCA, including network design, security analyses, logistics, reliability, and vulnerability to nuclear attack. The final recommendation was made by a technical team set up by DCA. Following the team's recommendations, Assistant Secretary of Defense Frank Carlucci stopped the Western Union AUTODIN II contract (resulting in large cancellation payments to Western Union), and on 2 April 1982, told BBN to begin adapting the Arpanet technology to DCA's needs. Among other adaptations, DCA wanted the host interface to be the CCITT's X.25 standard.18 This contract established Arpanet as the Defense Data Network (DDN), which supported the DoD's data communications requirements for the next 10 years.

On to the Internet
Arpanet kicked off a rapid growth in network technology, including satellite networks, local area networks (LANs), and packet radio networks.19 The networking community realized that interconnecting these different types of networks was a serious problem, and it took a number of somewhat parallel steps to deal with it. The end result of these parallel activities was what we know today as the Internet.

Interconnecting networks

The ARPA community had organized a "coming-out party" for the Arpanet in October 1972 at a conference in Washington, D.C.20 This conference was attended by dozens of Arpanet people and by representatives of the National Physical Laboratory (NPL) network in the UK and the Cyclades network in France (both experimental packet networks). It was also attended by the Canadian, French, and UK telephone companies, all of which were designing national packet networks. These groups, plus researchers from Japan, Norway, and Sweden, got together during the conference to discuss how to interconnect these and future packet networks so that a host attached to one network could communicate with a host on any other. The group called itself the International Network Working Group (INWG) and began an immediate exchange of papers (INWG Notes). INWG Note No. 6, distributed the next month by Donald Davies of NPL, stated that "It was agreed [in October] that networks will probably be different and thus gateways [routers] between networks will be required." Davies went on to set forth questions on routing, flow control, addressing, and so on that had to be considered in the design of the constituent networks, gateways, and host protocols.

Within a few months, INWG formally became a subcommittee of the International Federation for Information Processing (IFIP), which gave it standing to participate officially in international standards-making organizations.

Meanwhile, 1972 marked the beginning of a new four-year cycle in the standardization activities of the CCITT, a treaty organization of national telephone companies. During 1973 and 1974, both the CCITT and the INWG discussed a number of proposals for interconnections between public packet-switched networks. One significant proposal, in a paper by Vint Cerf and Bob Kahn, described a specific Internet Transmission Control Program (TCP) that implemented a byte stream.21 Other submissions to INWG, from France and the UK, segmented the data stream into letters rather than bytes for error and sequence control, but all these proposals assumed that the network itself would simply carry the data as independent packets, an activity known as a datagram service. However, CCITT, which had not formally accepted the idea of a datagram service, was discussing the possibility of implementing a virtual-circuit concept within the network.

ARPA felt that it did not have time to wait for a CCITT decision, because it needed to interconnect immediately the various networks it was building or designing, including Arpanet, a shared-channel satellite network (Satnet)22 built by BBN, and several networks of mobile packet radios.23 By late 1974, ARPA was working to implement the ideas from the Cerf/Kahn paper, which BBN first implemented in a 1975 experiment (see the TCP research section) and first demonstrated with all three ARPA networks in late 1977.24

After a period of TCP experimentation, the Internet researchers realized that creating a single superprotocol across network boundaries was difficult and limiting (for example, packetized voice didn't need a reliable protocol). They also recognized that the problem was much simpler if it was split into two parts: a simpler Transmission Control Protocol (TCP) that managed communication between end points and a new Internet Protocol (IP) that routed datagrams between different networks.25 The TCP specification was formally stabilized in 1979.

Meanwhile, in July 1975, representatives from ARPA, Cyclades, and NPL (plus Alex McKenzie of BBN) were working on the problem of establishing standards for the CCITT.

They hammered out a compromise proposal based on datagrams,26 agreed to use this as a basis for experiments with interconnecting the three networks, and formally submitted the idea to CCITT. However, there was no enthusiasm for datagrams in CCITT, and in 1976 it adopted the circuit-oriented X.25 standard for data communications. After the CCITT approved the X.25 standard, the international research community received a shock when ARPA decided that TCP implementation was too far advanced to restart with X.25. US and European networking research diverged as a result.

TCP research

Although the IMP Guys were initially rather cool to TCP,27 the networking people in BBN's Information Sciences division were TCP enthusiasts from the beginning. The first TCP was implemented in BCPL (a precursor to the C programming language) for the TENEX operating system28 by Ray Tomlinson. While experimenting with this implementation to send files to a printer, Tomlinson found that data from old connections was getting mixed with data from new connections because of overlapping sequence numbers. This discovery led him to develop a theory of managing sequence numbers, in particular, a set of rules for when a particular sequence number can safely be reused and when its use is forbidden. His paper remains the standard reference today.29
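
To give a sense of the issue Tomlinson was wrestling with, the following minimal C sketch (our illustration, not his actual rules) shows two ingredients of the eventual solution: sequence numbers occupy a finite 32-bit space, so ordering must be judged modulo 2^32, and initial sequence numbers can be driven by a clock so that a new connection starts ahead of any segments that might still be in flight from an old one.

    #include <stdint.h>

    /* Illustrative sketch only: these are not Tomlinson's actual rules,
     * and the clock rate below is simply an assumed constant. */

    /* Wraparound-safe ordering: "a comes before b" in 32-bit sequence space. */
    static int seq_before(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;
    }

    /* Derive an initial sequence number from a coarse clock, so that a new
     * incarnation of a connection begins with numbers that old, still-live
     * segments from a previous incarnation are unlikely to overlap. */
    static uint32_t choose_isn(uint64_t microseconds_since_boot)
    {
        return (uint32_t)(microseconds_since_boot / 4);
    }

The hard part, which this sketch omits, is deciding how long old segments can survive in the network and therefore when a number becomes safe to reuse; that is the question Tomlinson's paper answers.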

The initial TCP implementation was extremely slow, so slow that Bob Kahn (by then at DARPA) expressed concern that TCP would never amount to anything. So Bill Plummer of BBN reimplemented TCP in assembly code and put it into the operating system to improve memory performance by swiftly mapping pages. This TCP was used to experiment with a number of TCP features, such as Desync-Resync (DSN-RSN) and Rubber End-of-Lines (used for record demarcation), that ultimately did not become part of the TCP standard.30

In 1979, DARPA solicited proposals to replace the aging TENEX operating system with a new research operating system for the DARPA community. DARPA split the work between the Computer Science Research Group at Berkeley, which would implement a paged version of Unix 32/V, and BBN, which was responsible for all the networking code. This version of Unix and TCP ran on a DEC VAX minicomputer. The BBN networking implementation was done largely by Rob Gurwitz, with some help from Jack Haverty. Haverty had already done a TCP implementation for Unix version 6 on a DEC PDP-11. Although they could have started with Haverty's TCP, they decided to start afresh, in large part because Haverty's version tied TCP and IP closely together (a vestige of the original single-protocol standards).

Gurwitz's implementation, the first widely used Unix implementation, had a number of interesting features. It required applications to open special Unix files (for example, /dev/tcp) to create network connections. To manage variable-sized packets in memory, Gurwitz created a new type of memory buffer called an mbuf. And, in an interesting internal feature, the implementation used a state-event matrix of functions: that is, if you received a particular type of packet and your connection was in a particular state, you indexed a matrix to find a pointer to the appropriate function.31
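
The state-event matrix is easy to picture in C. The sketch below is a hypothetical reconstruction of the pattern, not Gurwitz's code; the states, events, and handler names are invented for illustration.

    #include <stdio.h>

    /* Hypothetical reconstruction of a state-event dispatch matrix. */

    enum state { ST_LISTEN, ST_SYN_RCVD, ST_ESTABLISHED, N_STATES };
    enum event { EV_SYN, EV_ACK, EV_FIN, N_EVENTS };

    typedef void (*handler_fn)(void *conn);

    static void send_synack(void *conn) { (void)conn; puts("send SYN+ACK"); }
    static void establish(void *conn)   { (void)conn; puts("connection established"); }
    static void note_ack(void *conn)    { (void)conn; puts("record acknowledgment"); }
    static void start_close(void *conn) { (void)conn; puts("begin close"); }
    static void drop(void *conn)        { (void)conn; puts("drop segment"); }

    /* One function pointer per (state, event) pair: handling an arriving
     * segment is an indexed lookup followed by an indirect call. */
    static handler_fn dispatch[N_STATES][N_EVENTS] = {
        [ST_LISTEN]      = { [EV_SYN] = send_synack, [EV_ACK] = drop,      [EV_FIN] = drop },
        [ST_SYN_RCVD]    = { [EV_SYN] = drop,        [EV_ACK] = establish, [EV_FIN] = drop },
        [ST_ESTABLISHED] = { [EV_SYN] = drop,        [EV_ACK] = note_ack,  [EV_FIN] = start_close },
    };

    static void on_segment(enum state s, enum event e, void *conn)
    {
        dispatch[s][e](conn);
    }

    int main(void)
    {
        on_segment(ST_LISTEN, EV_SYN, NULL);    /* prints "send SYN+ACK" */
        on_segment(ST_SYN_RCVD, EV_ACK, NULL);  /* prints "connection established" */
        return 0;
    }

The appeal of the pattern is that adding a state or an event means filling in one more row or column of handlers rather than rewriting a tangle of nested conditionals.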

The BBN Berkeley Software Distribution TCP was the standard TCP for 4BSD and BSD Unix 4.1. However, in BSD 4.2, the team at the University of California, Berkeley, created its own implementation of TCP/IP (using the now-familiar socket interface developed by Bill Joy and Sam Leffler of Berkeley along with Gurwitz). BBN promptly revised its TCP implementation to use the socket interface,32 and for about a year there was a battle to determine whose networking code would take precedence. Although the BBN code won some adherents, and was licensed to several computer vendors, the Berkeley code won the battle.

A few effects lingered, however. First, mbufs remained the standard way to manage packet memory until the mid-1990s. Second, and somewhat amusingly, the Berkeley team would sometimes justify bugs in their TCP by pointing out that the original BBN code had the same bug. Third, DARPA continued to fund a vestige of the BBN Unix TCP project into the late 1980s, and during that time BBNers (Karen Lam, Craig Partridge, and David Waitzman) worked with Steve Deering to create the first implementation of IP multicast.33

BBN did TCP implementations on other platforms. Charlie Lynn34 wrote a TCP for the DEC TOPS-20 system. Jack Sax and Winston Edmond wrote an implementation for Hewlett-Packard's HP-3000 (DARPA was concerned that all the TCP implementations were on DEC machines and wanted to show that TCP was not DEC-specific).

Open Systems Interconnection (OSI)

While BBN's attention was focused on the AUTODIN II procurement and ARPA's TCP program, the European computer community was growing increasingly distressed over CCITT's standardization program. Immediately after adopting the X.25 host interface standard, CCITT began an accelerated program of standards development to support terminals and applications such as electronic mail. Computer manufacturers doing business in Europe, as well as the computer research community, felt the telephone monopolies must be prevented from controlling the form that computer application software would take. They decided to counter the CCITT by creating a data-communication standardization activity within the International Organization for Standardization (ISO), which had already produced many computer standards. The first meeting of this activity, known as Open Systems Interconnection (OSI), took place in early 1978.

The US government was represented in ISO by the National Bureau of Standards (NBS), now the National Institute of Standards and Technology. The DoD urged NBS to ensure that any standards developed in the OSI project provided the same functionality as TCP (and IP), so that eventually this functionality would be provided by computer manufacturers as part of their bundled software rather than needing to be developed specially with DoD funding. To achieve this goal, NBS awarded BBN a contract to provide technical assistance both at and between ISO meetings; this assistance took the form of drafting position papers and detailed protocol specifications that reconciled the NBS interests with the requirements of other ISO members. Alex McKenzie led this effort for BBN. One consequence of this work was that, for a time, BBN had a weekly lunch table meeting where people on the NBS contract practiced their French for use at standards meetings.

Perhaps not surprisingly, the TCP community considered the OSI project a colossal waste of time and the virtual circuits of X.25 a major technical error. Thus there were many debates in the halls of BBN among the groups implementing X.25 interfaces for DCA, routers and TCP/IP host software for ARPA, and OSI proposals for NBS. In spite of these internal debates (or perhaps because of them), the group supporting NBS achieved some notable results. Debbie Deutsch, Bob Resnick, and John Vittal developed a considerable portion of Abstract Syntax Notation One (ASN.1), a protocol used by ISO, CCITT, and the Internet community to describe the content and encoding of application data.35 Ross Callon almost single-handedly convinced ISO to include a connectionless network facility corresponding to IP in the OSI standards and wrote most of the specification.

John Burruss and Tom Blumer developed a method of formally describing a protocol state machine in terms easily understood by human readers, yet directly compiled and executed, thus eliminating the ambiguities possible in a natural-language protocol description. Blumer implemented an execution environment to support the protocol state machine, and Burruss wrote the formal description of a protocol providing the functionality of TCP that ISO included in the OSI standards.

IP, routers, and routing protocols


Central to the IP concept is the router, a device that takes datagrams from one network and places them on another network. It's called a router because its job is to move datagrams between networks such that the datagrams proceed along the correct routes to their destinations. Equally important is the concept of routing protocols, through which the routers learn from each other how to move a datagram from network to network, from its source to its destination.

By late 1980, the DoD had adopted TCP/IP and the Arpanet's terminal support (Telnet) and File Transfer (FTP) protocols as DoD standards. In 1981, planning began for all Arpanet hosts to transition to TCP/IP. The official transition completion date was to be 1 January 1983; in fact, the transition was not completed for several more months. The conversion to TCP/IP was mandated to make it possible to split Arpanet into multiple networks without disrupting host computers' ability to communicate with one another, regardless of which network they were assigned to. The networks were to be connected by mail bridges. These devices were customized routers that could filter out undesirable traffic, in essence, the first firewalls. The mail bridges were built and operated by BBN, making BBN the first Internet router vendor and putting BBN at the center of the early development of IP routing protocols.

Routers

Probably BBN's earliest published thinking about interconnecting networks resulted from work with satellite networks (described in the Radio and wireless section).36 In 1975, Virginia Strazisar joined BBN and was tasked with implementing an IP router (at that time, called a gateway) on a PDP-11. This was a BCPL implementation on the ELF operating system, remembered fondly as a wonderful prototype: it ran well, albeit slowly. By late 1976, three routers were up and running: one at BBN, connecting an Arpanet clone that BBN used as its internal LAN with the Arpanet itself; one at SRI, between the ARPA Packet Radio Network and the Arpanet; and one at University College London, connecting the Atlantic Satellite Network and the Arpanet.37
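
Conceptually, the job of these early gateways and mail bridges can be reduced to a few lines of C. The sketch below is purely illustrative (the route table, the blocked prefix, and the structure layouts are invented, not drawn from any BBN code); it shows a destination-based forwarding lookup plus the kind of policy filtering the mail bridges added.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct route  { uint32_t network, mask; int out_interface; };
    struct filter { uint32_t src_network, mask; bool allow; };

    static const struct route routes[] = {
        { 0x0A000000, 0xFF000000, 1 },      /* 10.0.0.0/8     -> interface 1 */
        { 0xC0A80000, 0xFFFF0000, 2 },      /* 192.168.0.0/16 -> interface 2 */
    };

    static const struct filter filters[] = {
        { 0xC0A80500, 0xFFFFFF00, false },  /* block traffic from 192.168.5.0/24 */
    };

    /* Return the output interface for a datagram, or -1 to drop it. */
    static int forward(uint32_t src, uint32_t dst)
    {
        for (size_t i = 0; i < sizeof filters / sizeof filters[0]; i++)
            if ((src & filters[i].mask) == filters[i].src_network && !filters[i].allow)
                return -1;                      /* policy says drop */
        for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++)
            if ((dst & routes[i].mask) == routes[i].network)
                return routes[i].out_interface; /* hand to the next network */
        return -1;                              /* no route */
    }

Everything interesting about real routers, then and now, lies in how the route table is filled in, which is the subject of the routing protocols discussed below.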

In 1981, in anticipation of Arpanet's TCP/IP transition, BBN's router work was given to a new team, led by Bob Hinden, charged with developing a router system that BBN could operate for the DoD. Mike Brescia and Alan Sheltzer reimplemented the router in assembly language under the MOS operating system for both the PDP-11 and the DEC LSI-11 processors.38 The LSI-11 rapidly became the preferred platform and was widely used into the mid-1980s.39

Around 1983, the BBN router team began to grapple with the deficiencies of shared-bus, single-processor hardware as a base for router implementation. In a device whose job is largely moving data between external interfaces, the BBN team felt the most efficient architecture would put processing near each interface and allow interfaces to talk to each other directly, rather than having to go through a processor that managed a shared bus. This thinking was about a decade ahead of its time and market needs. However, BBN was in a position to make the multiprocessor router a reality. Another team in the company was completing an innovative multiprocessor computer called the Butterfly, which interconnected processors and peripherals through a time-slotted banyan switch. So BBN decided to try to build a next-generation router on the Butterfly platform.40 The team, led by Hinden, included Eric Rosen, Brescia, Sheltzer, and Linda Seamonson.

As a research activity, the Butterfly router was an important innovation. The software, in C, was the first demonstration that a high-performance router could be implemented in a higher-level language. The router team also learned a number of painful lessons, most notably that the Butterfly was rich in processing power but weak in bandwidth between peripherals, and that this balance was exactly the reverse of what a router would want. Indeed, performance issues led Rosen and Seamonson to invent an early version of label switching.41

Although the Butterfly gateway (as the router was called) was the fastest router available, its performance/price ratio was poor. Unfortunately, the Butterfly gateway became BBN's de facto router product. It was a mistake. The router was expensive, slow to reboot,42 and, although it eventually performed well, hard to maintain. BBN managed to sell around 50 of them, largely to government clients who needed the fastest possible router. But when the Internet was opened to general use and the router market suddenly blossomed, BBN was caught flat-footed. The Butterfly gateway, although a more mature product, was simply not price competitive.

Figure 1. Lightstream asynchronous transfer mode (ATM) switch. (Photo courtesy of BBN Technologies.)

Despite internal resistance (one vice president asked why he would want to build a $20,000 router when he was selling IMPs for $80,000-plus), a team led by Hinden and Steve Blumenthal did build a price-competitive router called the T/20, which placed the Butterfly gateway code on a single-processor card, with daughter cards for each interface. The T/20 was used extensively to support packet videoconferencing and distributed real-time simulation on the Defense Simulation Internet. It was also widely deployed in the US Army's Mobile Subscriber Equipment network. However, by the time the T/20 router reached the commercial market, Cisco Systems had already won the market.

Although the Butterfly gateway swiftly faded, its influence lingered. A team that included part of the Butterfly team43 designed and built an early high-end asynchronous transfer mode (ATM) switch (see Figure 1). BBN created a successful start-up, BBN Lightstream Corp., to market the switch. Lightstream was funded by BBN and Ungermann-Bass and was eventually sold to Cisco.

A little later, between 1992 and 1996, a BBN team led by Craig Partridge, Josh Seeger, Walter Milliken, and Phil Carvey (Carvey and Milliken were members of the Butterfly team) designed and built a prototype of the world's first 40-gigabit-per-second router. The router design reflected BBN's painful experience with the Butterfly gateway. It included a switch designed specifically to move IP datagrams from arbitrary input interfaces to arbitrary output interfaces.

(One of the lessons of the Butterfly experience was that IP traffic tended to come in bursts to a single destination, which could overload switches that assumed balanced traffic.) Variants of this router architecture are now standard, and BBN's journal paper on the router44 is required reading at many corporations. BBN remains a center of expertise in the design and implementation of high-end routers and router-like devices (such as packet switches, firewalls, and encryptors) today.

Routing protocols

The original Arpanet procurement, which asked the contractor to design a routing algorithm, suggested an example algorithm based on complete knowledge of the network configuration at a central control facility and updates from the central facility to the individual packet switches.45 BBN viewed central control as inconsistent with the Arpanet's robustness goals and instead designed and implemented a dynamic system that set the stage for the worldwide distributed routing system of today's Internet. Bob Kahn suggested the structure for distributed routing,46 and Will Crowther devised and implemented a detailed set of algorithms that47

- adapted to changing installations of switching nodes and internode communication links, with minimal configuration information in each node and no centralized control;
- discovered and adapted to temporary node and link ups and downs; and
- routed data traffic along the path of least delay.

The implementation included link alive/dead logic, internode packet retransmission logic, and a distributed, asynchronous, adaptive routing calculation. These features were a major break with the more or less fixed routing under central control and the inadequate internode data acknowledgment schemes that were typical up to 1969. The implementation included the discovery of the distributed asynchronous real-time algorithm now widely known as Arpanet distance vector routing.48 This initial routing could not adapt accurately or quickly enough as the Arpanet (and later the Internet) grew in complexity and size. Nonetheless, it provided an initial dynamic, distributed implementation that supported the quasi-operational Arpanet in its early years and provided a testbed for developing improved algorithms.49

From 1973 through 1975, John McQuillan tuned the initial Arpanet routing algorithm and implementation and began planning an improved implementation.50

From 1976 through 1979, led first by McQuillan and later by Eric Rosen, a small team designed, experimented with, and finally implemented operationally a new Arpanet routing algorithm11 now widely known as Arpanet link-state routing or shortest path first (SPF).51 The essence of this implementation, according to an email from McQuillan to Dave Walden on 22 April 2003, was "to build a routing database of topology and traffic for the whole net" and "to build a complete routing tree" at every node. Much work and careful thinking went into making the distributed routing databases accurate and coherent, including development of an improved means for measuring network delay and the use of flooding to disseminate the information reliably and efficiently. This implementation included a real-time distributed implementation of Dijkstra's algorithm.

When it came time to implement IP routers, BBN built on its prior work. The first routers used a distance vector protocol called the Gateway-to-Gateway Protocol (GGP),38 patterned after Crowther's original Arpanet routing protocol. Later, as the Internet grew, GGP had the same scaling problems as its predecessor, so the Butterfly gateways implemented an SPF protocol patterned after McQuillan's. However, the BBN router team also realized that one routing protocol was not enough. A hierarchy of routing protocols was needed; in particular, there needed to be some way to put boundaries between different pieces of the Internet, so that errors and routing problems in one part of the network didn't spill over (or, perhaps better said, spilled less) into other parts of the network. Sharing routing information across these operational boundaries required a new type of routing architecture and protocol. Eric Rosen developed an architecture dividing the Internet into a set of autonomous systems, each of which was a relatively homogeneous set of routers and routing protocols, and the Exterior Gateway Protocol (EGP) to communicate basic routing information between the autonomous systems.52

The work of Crowther, McQuillan, and Rosen still defines how we do routing today. The major routing protocols (Routing Information Protocol, Interior Gateway Routing Protocol, Open Shortest Path First, Intermediate System-to-Intermediate System, and Border Gateway Protocol) are all derivatives of the original Arpanet protocols and EGP.
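
The heart of the SPF approach is easy to show in miniature: every node holds a copy of the link database and runs Dijkstra's algorithm over it to build its own routing tree. The following self-contained C sketch uses an invented five-node topology with made-up link delays; it illustrates the algorithm itself, not BBN's implementation.

    #include <stdio.h>

    #define N 5                 /* number of nodes */
    #define INF 1000000

    /* Link database: delay[i][j] is the link delay between i and j, 0 = no link.
     * The topology and the delay values are invented for illustration. */
    static int delay[N][N] = {
        {0, 2, 5, 0, 0},
        {2, 0, 1, 4, 0},
        {5, 1, 0, 1, 6},
        {0, 4, 1, 0, 3},
        {0, 0, 6, 3, 0},
    };

    int main(void)
    {
        int dist[N], prev[N], done[N] = {0};
        int src = 0;            /* the node doing the computation */

        for (int i = 0; i < N; i++) { dist[i] = INF; prev[i] = -1; }
        dist[src] = 0;

        for (int iter = 0; iter < N; iter++) {
            int u = -1;
            for (int i = 0; i < N; i++)          /* pick closest unfinished node */
                if (!done[i] && (u == -1 || dist[i] < dist[u])) u = i;
            if (u == -1 || dist[u] == INF) break;
            done[u] = 1;
            for (int v = 0; v < N; v++)          /* relax u's links */
                if (delay[u][v] && dist[u] + delay[u][v] < dist[v]) {
                    dist[v] = dist[u] + delay[u][v];
                    prev[v] = u;
                }
        }

        for (int i = 0; i < N; i++)
            printf("node %d: total delay %d, previous hop %d\n", i, dist[i], prev[i]);
        return 0;
    }

Running it prints, for each destination, the total delay and the previous hop on the least-delay path, which together form exactly the per-node routing tree the text describes.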

Network operations
Typically, once BBN built an innovative network such as the Arpanet or the Wideband Network, BBN was asked to operate it. BBN rapidly developed a strong competence in network operations.

It had a 24/7 network operations center (NOC) for the Arpanet, supplemented by on-call technical staff. The Arpanet NOC was a major resource and was often called upon to help operate other networks, such as Satnet and the Wideband Network.

Although the operations staff was professional, the community was small enough to tolerate (indeed, encourage) a bit of flair and friendly one-upmanship. Mike Brescia always seemed to know what was happening in any router on the Internet.53 Dennis Rockwell specialized in debugging knotty network problems and always seemed to know (by heart) the phone number of the relevant techie whose system was causing the problem. The result was a fun-loving yet earnest group of people who ran many of the world's biggest data networks into the 21st century.

CSnet

By the late 1970s, US computer science departments had realized that departments on the Arpanet could collaborate and share research ideas at an entirely different speed than departments not on the net, and the non-networked departments were in danger of being left behind. CSnet was the proposed solution to this communications divide. CSnet, created in 1981 by the US National Science Foundation (NSF, in cooperation with DARPA), provided email and TCP/IP access to the Arpanet for computer science research institutions that did not qualify to be attached to the Arpanet directly. Building on its success operating the Arpanet, BBN won the contract to operate CSnet in 1983, taking over from a team of universities that had gotten the network started.54

CSnet was the first real Internet service provider. Its contract with NSF required CSnet to be self-sufficient after a five-year start-up period, and so from its inception, CSnet was run as a not-for-profit business and charged fees to the sites it connected. The CSnet team was led by Dick Edmiston. Laura Breeden headed marketing, user services, and accounting, and Dan Long headed technical services. Breeden's team (typically just three people) issued a quarterly newsletter, handled dozens of messages a day from academic users trying to figure out how to send email to their colleagues,55 and worked hard to recruit new sites. Long's team ran the network, providing 24/7 coverage. Beyond maintaining equipment and dealing with network outages and balky email queues, this team also wrote or maintained much of the CSnet subscribers' networking software. The team also did work for the Internet community as a whole.

Figure 2. NEARnet Network Operations Center. (Photo courtesy of BBN Technologies.)

For several years, Long maintained the MMDF-II (for Multi-channel Memo Distribution Facility) email software. Craig Partridge figured out how to route email using domain names.56 Leo Lanzillo wrote the first dial-up IP system.57

CSnet was a tremendous success. By 1985, its Arpanet IMP was one of the three busiest in the network. Thanks to tight fiscal management by Edmiston and Breeden, CSnet was profitable by 1986. Soon thereafter, CSnet hired John Rugo as a full-time marketing director, and he began signing up new CSnet members at a prodigious rate.58 CSnet's success encouraged NSF to create a similar program in 1987, in which NSF regional networks were given start-up funding to establish IP networks throughout the US. As part of the NSFnet program, BBN was tasked to create the first interNIC, called the NSF Network Service Center (NNSC). For the first few years of the NSF network program, the NNSC staff handled the cross-network coordination of information for users and network operators.

However, the rise of NSF-funded regional networks led to the end of CSnet. In particular, CSnet's email-only customer base swiftly faded away (why buy email through CSnet when you could get full IP access from a local ISP?). A few years later, CSnet merged with Bitnet.

BBN as an ISP

As CSnet began to falter, the CSnet team moved on to a new activity: operating the NSF-funded New England Academic and Research Network (NEARnet), a regional network (see Figure 2), on behalf of a university consortium. The NEARnet team was led by Rugo (who reported to Edmiston) and rapidly repeated CSnet's success. Unlike many NSF regional networks, NEARnet quickly became profitable.59

In the early 1990s, the acceptable use policy (AUP) of the NSFnet was changed to allow commercial use of the Internet. The universities that had been running the NSFnet regional networks began to get out of the network operations business, and BBN acquired NEARnet along with other regional networks in Northern California and the southeastern US. In 1995, BBN began to build out a national Internet backbone, renamed the service BBN Planet, and became the country's second largest ISP.

Operationally, BBN Planet continued in the BBN tradition.60 Independent measurement services routinely rated its backbone performance as the best (lowest latency and packet loss).61 BBN Planet rolled out early (and popular) managed firewall and Web-hosting services. It also managed a considerable portion of America Online's dial-in links for many years.

Internet usage went through phenomenal growth in the late 1990s. BBN Planet was more than doubling in size and quadrupling in traffic in the mid-1990s.62 As a result, ISPs had to invest heavily and swiftly in new infrastructure simply to avoid losing market share. The need for large amounts of capital, and to keep up with other ISPs, caused BBN to be sold to GTE in 1997. GTE married the BBN Planet ISP with a large project to build a global fiber network to create GTE Internetworking. In 2000, when GTE merged with Bell Atlantic to create Verizon, the Internet business and the fiber network business were not permitted to operate in the former Bell Atlantic telephone company territory, and these businesses were spun out in a large initial public offering as Genuity. From 2000 to 2002, Genuity continued to grow, eventually reaching $1.2 billion in revenue. However, the Internet boom slowed, and Genuity struggled to get its costs in line with revenues. It was eventually put through bankruptcy and sold to Level(3) in 2003.63

Radio and wireless


Early in the development of the Arpanet, point-to-point links carried over geosynchronous satellites were used as inter-IMP circuits to reach overseas locations such as Hawaii and Europe. These links were treated like ordinary network circuits, except that they had long delays, typically 250 milliseconds each way, and thus required extra packet buffering, which caused problems for the original Arpanet routing algorithm. However, ARPA soon began to investigate ways to use wireless technologies, both satellite and terrestrial, as the basis for building networks.

Early packet satellite network R&D and Satnet

The earliest ARPA work was instigated by Larry Roberts and inspired by the ground-based radio Aloha system that Norm Abramson and his colleagues had developed at the University of Hawaii. ARPA encouraged researchers to take advantage of the satellite channel's broadcast nature to merge uplink traffic from many nodes, use the satellite channel to achieve statistical multiplexing, and deliver the combined aggregate data stream to all sites simultaneously. Eventually, the ARPA researchers developed a scheme called Priority Oriented Demand Assignment (PODA).64

In 1975, ARPA initiated the project to build a working PODA network, named the Atlantic Packet Satellite Network and later shortened to Satnet. The purpose of Satnet was to extend the Internet to Europe and to support both experiments in the use of broadcast satellite channels for packet switching and joint North Atlantic Treaty Organization experiments in distributed command and control. The original Satnet used a 64-kilobit-per-second (Kbps) channel on Intelsat's Atlantic Ocean Region satellite and included three main sites located at Intelsat country earth terminals at Etam, West Virginia, in the US; Tanum, Sweden; and Goonhilly Downs, UK. Satnet was operated as a separate network, with early IP routers connecting it to the Arpanet in the US and to local in-country networks at research institutions in Europe. The point-to-point transatlantic satellite links of the Arpanet were disconnected, and Satnet provided the emerging Internet's primary network connection to Europe. In the early 1980s, two new sites were established at Raisting, Germany, and Fucino, Italy. As fiber optics became more prevalent and cheaper at the end of the 1980s, Satnet was retired and replaced by transatlantic undersea fiber connections.

BBN, selected to implement the Satnet IMP (SIMP) as a modification to the standard Arpanet IMP, also had overall responsibility for Satnet operations. The network was monitored and controlled by the Arpanet NOC at BBN, using some of the same tools used for the Arpanet or tools that had been modified for the Satnet application.

Wideband packet satellite network; packet voice

In the mid-1970s, Bob Kahn (having left BBN for ARPA) began to think about how packet switching could be extended to other types of communications, such as voice.

He commissioned a study by Howard Frank and Israel Gitman of the Network Analysis Corp. to examine the economic costs and relative efficiencies of carrying voice by circuit- and packet-switching techniques. The study concluded that packet switching had the potential to be substantially more efficient. Two major problems had to be solved, however, to make packet voice a reality.

First, it was necessary to find a way to digitize voice into packets. John Makhoul and the BBN Speech Group participated in ARPA-sponsored research in voice compression techniques leading to the development of linear predictive coding (LPC), which has become one of the standard ways for voice to be compressed and transmitted as packets. Using LPC, the BBN group built an experimental system that compressed speech from 64 Kb/s down to 300 bits/s. Today, typical LPC voice coder/decoders (vocoders) run at 8 to 10 Kb/s.

Second, as early experiments in sending voice packets over the Arpanet revealed, voice transmission required a pure datagram service (which Arpanet's reliable link layer did not provide) with some support for quality of service (QoS). A voice packet network would also benefit from high-speed circuits (to reduce delays) and the ability to deliver packets to multiple destinations simultaneously. BBN's Dick Binder worked with Estil Hoversten and Irwin Jacobs of Linkabit Corp., along with Kahn and Vint Cerf from ARPA, to conceive the Wideband Packet Satellite Network, which would use a 3-megabit-per-second (Mbps) broadcast satellite channel to connect multiple sites in the US and support broadcast and multicast delivery for voice conferencing.

The Wideband Net brought together the packet speech and packet satellite networking work by creating a stream service for packet voice traffic. The stream service permitted sites to reserve periodic time slots in each frame on the satellite channel to carry the voice packets. The rest of the frame could be used for more bursty data traffic. The first four sites were the Massachusetts Institute of Technology's Lincoln Laboratory in Lexington, Massachusetts; the Defense Communications Engineering Center (DCEC) in Reston, Virginia; USC/ISI in Marina del Rey, California; and the Stanford Research Institute (SRI) in Menlo Park, California.

Binder led the BBN team that built the original Wideband Net packet switch. Gil Falk ran the project when Binder left BBN. The switch, built on the BBN Pluribus multiprocessor, was called the Pluribus Satellite IMP, or PSAT for short. The Pluribus was chosen because, having roughly a half-dozen Lockheed System-User-Engineered (SUE) minicomputer processors and a high-speed satellite interface, it possessed the processing power to run the PODA algorithms and accommodate the 3-Mbps channel.

The Pluribus presented a difficult programming environment. During the project's early stages, many people worked on the development, including John Robinson, Tony Lake, Jane Barnett, Dick Koolish, Steve Groff, Walter Milliken, Marian Nodine, and Steve Blumenthal. Burnout on the programming team was a problem. The PSAT software was buggy and could not be made to run reliably for any length of time. The hardware's several wire-wrapped boards also had some long-term stability problems, and faults were difficult to attribute to specific hardware or software causes. Blumenthal eventually took on an operations role and began figuring out how to make the PSAT and the entire system more robust. Eventually, Blumenthal became the Wideband Net project manager.65

In the late 1970s, as a part of the Wideband Net effort, BBN began developing a high-performance packet voice multiplexer, the Voice Funnel, which was hosted on the Butterfly computer. Randy Rettberg and Blumenthal convinced ARPA that it made sense to port the PSAT to the Butterfly to create a BSAT. Winston Edmond, who had joined the Wideband Net effort, headed the BSAT's architectural design. Edmond created an elegant design that took advantage of situations in which the PODA satellite channel scheduling processes could be parallelized for increased performance. The BSAT was begun in 1982, and Milliken and Nodine worked with Edmond to complete it by 1984.66

The BSAT was a big success and allowed the BBN team to refocus its efforts on meeting ARPA's functional and performance goals and on operating the network reliably.67 Users were now reliably supported in their video- and packet-voice conferencing and high-speed networking research. An additional six sites were added to the network. User organizations such as MIT Lincoln Laboratory, the MIT Laboratory for Computer Science (LCS), ISI, and SRI began to use the Wideband Net for multisite video- and packet-voice conferencing. The Wideband Net was also used to provide real-time networking between SIMnet sites; see the "Distributed Computing at BBN" article by R. Schantz elsewhere in this issue.

By the mid-1980s, the Wideband Net's stream service support for resource allocation and QoS within the network led to the development of similar capabilities at the Internet level. Claudio Topolcic and Lou Berger of BBN led the development of the ST-2 protocol68 to support real-time packet voice and video communication over the Internet. These protocols were developed within the newly formed Internet standards body, the Internet Engineering Task Force (IETF).

The ST-2 protocol was a connection-oriented protocol that maintained state within the network; other, connectionless schemes such as the Resource Reservation Protocol, Real-Time Protocol, and Differentiated Services were developed by the IETF as alternatives. BBN people contributed to all these IETF efforts. These protocols form the basis of Voice over IP services today.

As T1 phone line service became cheaper and more widely available in the late 1980s, high-speed terrestrial networks could be built that did not have the 250-ms satellite channel latency. Winston Edmond adapted the BSAT software to work over a shared cross-country bus made up of multiple parallel T1 circuits. This network became known as the Terrestrial Wideband Network (TWBnet), and the BSATs became Wideband Packet Switches (WPSs). The TWBnet supported a real-time stream service along with a bursty datagram service. In the early 1990s, BBN extended this network globally, from Germany to South Korea, as the Defense Simulation Internet, which supported real-time SIMnet and other war-gaming exercises.

Gigabit satellite networking with NASA's ACTS

In 1992, a BBN team led by Marcos Bergamo was selected by DARPA and NASA to design, develop, deploy, and operate the world's first Gigabit Satellite Network (using NASA's Ka-band Advanced Communications Technology Satellite, or ACTS) to demonstrate the practical feasibility of integrating satellite and terrestrial Internet and ATM services for distributed supercomputing, remote visualization, and telemedicine applications. The initial architecture, performance requirements, challenges, and development recommendations for the network were first defined in a study BBN prepared for DARPA during the early 1990s.69

A network of five large, transportable Ka-band earth stations, built around the gigahertz-wide multiple-beam-hopping and onboard transponder-switching capabilities of ACTS, was completed on a tight two-year schedule.70 Bergamo and his team decided on an approach that integrated Satellite-Switched Time Division Multiple Access (SS-TDMA) and OC-3/OC-12 Sonet add/drop multiplexing at the earth stations.71 This approach involved many challenges:

- develop a 120-watt traveling wave tube amplifier (nowhere to be found at the time) and a 3.4-meter Ka-band antenna;
- design a near-gigabit-rate modem capable of operating over gigahertz-wide transponders with then-unfamiliar noise-saturated characteristics;
- design and develop a digital terminal capable of multiplexing OC-3 (155 Mbps) and concatenated OC-12 (622 Mbps) Sonet data into satellite-switched TDMA bursts;
- invent a way to synchronize and distribute Sonet data gathered from diverse locations (geographically distributed Sonet islands in the continental US and Hawaii); and
- solve the critical problem of initially acquiring, and then maintaining, the earth stations' synchronization with the microwave and beam switching on board the ACTS satellite.

The network was deployed to five different US sites in 1994-1995 and operated until April 2000, when the ACTS satellite was deactivated. During its lifetime, the ACTS Gigabit Satellite Network was used for first-time over-the-satellite experiments, including a distributed supercomputing Lake Erie weather simulation and remote operation and visualization of the Keck telescope in Hawaii by NASA Goddard astronomers, and it served as multiple integrated ground-satellite Internet/ATM testbeds. In 1997, key BBN developers of the Gigabit Satellite Network were inducted as satellite innovators into the US Space Technology Hall of Fame, and for his work, Bergamo was personally recognized with the 2005 IEEE Judith Resnik Award.

Packet radio, wireless, and tactical military networks

BBN has been a major contributor to terrestrial wireless networking, particularly in mobile ad hoc networks, also called packet radio networks or multihop wireless networks. An ad hoc network is a (possibly mobile) collection of wireless communication devices that communicate without fixed infrastructure and have no predetermined organization of available links. The lack of fixed infrastructure, rapid changes in connectivity and link characteristics, and the need to self-organize pose challenges that make ad hoc networking significantly more difficult than cellular networking.72

In 1973, ARPA started a theoretical and experimental packet radio program. The initial objective was to develop a geographically distributed network consisting of an array of packet radios managed by one or more minicomputer-based stations, and to experimentally evaluate the system's performance.

The first packet radios were delivered to the San Francisco Bay area in mid-1975 for initial testing, and a quasi-operational network capability was established for the first time in September 1976.73 The project was multi-institutional; at BBN, Jerry Burchfiel and his team74 implemented the gateway to connect the packet radio network to the Arpanet and Satnet and developed the centralized (later distributed) routing and management of packet radio stations.75

In the mid-1980s, BBN played a key role in the next phase of ARPA's packet radio thrust, the Survivable Adaptive Radio Networks (Suran)76 program, which produced the first comprehensive prototype system for battlefield networking of elements in an infrastructureless, hostile environment. Under subcontract to GTE Government Systems, BBN designed and built a packet-switched overlay network for the US Army's Mobile Subscriber Equipment (MSE) tactical radio communications system using ruggedized versions of the BBN C/3 packet switch and the T/20 IP router. This was a huge contract for BBN during the late 1980s and early 1990s. Thousands of C/3Rs and hundreds of T/20 IP routers were deployed using Army tactical radio links to provide field data services. The MSE contract gave BBN the opportunity to further refine its program management skills for large government systems and also gave it credibility in the tactical communications arena.

In the early 1990s, BBN played a crucial role in two programs: the US Army's Near Term Digital Radio (NTDR), for which BBN developed scalable, adaptive networking,77 and DARPA's Global Mobile Information Systems (GloMo) program, as part of which BBN completed two large projects, the Mobile Multimedia Wireless Network (MMWN)78 and the Density- and Asymmetry-adaptive Wireless Network (DAWN).

These projects laid the foundation for BBN's emergence as one of the leaders in ad hoc wireless networking. Today, led by Chip Elliott, Ram Ramanathan, and Jason Redi, BBN is actively involved in several DoD programs for the next generation of battlefield networking. BBN has also been a thought leader in ad hoc networking research, often opening up new research avenues that the community is now hotly pursuing. In particular, BBN has achieved recognition as the leader in the use of directional antennas for ad hoc networking,79,80 the development of energy-conserving cross-layer protocols,81 and the concepts of topology control82-84 and scalable routing.85

What has BBN's role been?


This article is only an interim history. BBN continues to be a vital source of data networking ideas, especially in directional wireless networks and quantum key distribution networks.

Still, we conclude with a brief attempt to place BBN's contributions of the 1970s and 1980s in context. The debatable question is why BBN didn't capitalize more successfully on its networking leadership in the 1970s and 1980s. In the mid-1980s, BBN was the leading manufacturer of data communications switches and routers and the leading ISP. By the early 1990s, BBN was no longer in the switch and router businesses and was fighting for market share as a leading ISP. Why?

We suggest that the primary reason is a mismatch between BBN's core business model and the style of business required to succeed in the router and switch business. BBN's specialty is contract research: creating new technologies never seen (in some cases, never envisioned) before. That's a labor-intensive and intellectually demanding business. Furthermore, most funding sources for research pay for a researcher's time at an hourly rate. Accordingly, BBN's research core makes money by recruiting highly talented people and then finding customers with new problems who can keep those people busy. In contrast, selling routers or switches is a process of creating standardized (or semistandardized) products that can be sold repeatedly, as a commodity. A product sold in volume does not require substantial additional technical effort.

In this light, BBN's experience of the 1980s and early 1990s makes sense. During the 1980s, data networks were custom products, and BBN built a business of customizing its routers and switches to individual customers and then maintaining the customers' networks. These labor-intensive activities fit reasonably well with BBN's focus on keeping employees busy doing work for customers. When data communications became a commodity, BBN's business focus no longer fit the market, except in the ISP business, where customers paid BBN to operate a network (again, a people-intensive business).

Although BBN had indifferent success at capitalizing on its ideas, it undoubtedly succeeded at transferring its key ideas into the marketplace. We've noted repeatedly how BBN's ideas have become centerpieces of today's networking. BBN has also been a source of networking talent for the field. Although many individuals mentioned in this article are still at BBN, others eventually left to work elsewhere in data communications. Indeed, it is hard to find an important data communications company that does not, somewhere among its key staff, have a few ex-BBNers.


Acknowledgments
This article has benefited tremendously from the contributions of current and former BBNers: Mike Brescia, Bob Bressler, Chip Elliott, Rob Gurwitz, Frank Heart, Bob Hinden, Steve Kent, Alex McKenzie, Bill Plummer, Ram Ramanathan, Eric Rosen, Jason Redi, Ray Tomlinson, Ginny (Strazisar) Travers, David Waitzman, Dave Walden, and Jil Westcott. Dave Walden was deeply involved in pulling together the first draft of this article, as was Alex McKenzie. The authors take full responsibility for the content and, especially, for any errors of fact.
References and notes

1. A 1994 attempt to provide a complete bibliography of published papers from BBN in the networking area resulted in a 39-page document (Arpanet and Internet: A Bibliography of BBN Papers, compiled by A. McKenzie, D. Walden, and F. Heart, BBN, 1994).
2. K. Hafner and M. Lyons, Where Wizards Stay Up Late, Simon and Schuster, 1996; P. Salus, Casting the Net, Addison-Wesley, 1995; and J. Abbate, Inventing the Internet, MIT Press, 1999.
3. S. Crocker, then a graduate student at UCLA, has described UCLA as in a near panic to prepare when they discovered the interface message processor (IMP) was going to be delivered on time; see his introductory essay in J.K. Reynolds and J. Postel, "The Request for Comments Reference Guide," RFC 1000, 1 Aug. 1987; ftp://ftp.rfc-editor.org/in-notes/pdfrfc/rfc1000.txt.pdf.
4. F.E. Heart et al., "The Interface Message Processor for the ARPA Computer Network," Proc. AFIPS Conf., vol. 36, AFIPS Press, 1970, pp. 299-310.
5. Doing new releases across the network became possible once an IMP at BBN was on the network. A new release was loaded into the BBN IMP; then each neighboring IMP of the BBN IMP was instructed to reload from the BBN IMP, then a neighbor of those IMPs reloaded, and so on across the network.
6. A.A. McKenzie et al., "The Network Control Center for the Arpanet," Proc. 1st Int'l Conf. Computer Comm., S. Winkler, ed., ACM Press, 1972, pp. 185-191.
7. W.R. Crowther and R.E. Kahn, "Flow Control in a Resource-Sharing Computer Network," IEEE Trans. Comm., vol. COM-20, no. 3, 1972, pp. 536-546.
8. S. Ornstein et al., Users' Guide to the Terminal IMP, tech. report 2183, BBN, originally drafted by W.R. Crowther and D.C. Walden, July 1972.
9. BBN was awarded the IEEE 1999 Corporate Innovation Recognition award for its development of the IMP and TIP.
10. D.C. Walden, "Later Years of Basic Computer and Software Engineering," A Culture of Innovation: Insider Accounts of Computing and Life at BBN, D. Walden and R. Nickerson, eds., to be published.

11. J.M. McQuillan, I. Richer, and E.C. Rosen, "The New Routing Algorithm for the Arpanet," IEEE Trans. Comm., vol. COM-28, no. 5, 1980, pp. 711-719.
12. D.G. Bobrow et al., "TENEX, a Paged Time Sharing System for the PDP-10," Proc. 3rd ACM Symp. Operating System Principles, ACM Press, 1971, pp. 135-143.
13. C.S. Carr, S.D. Crocker, and V.G. Cerf, "Host-Host Communication Protocol in the ARPA Network," Proc. AFIPS Spring Joint Computer Conf., vol. 36, AFIPS Press, 1970, pp. 589-597.
14. A. McKenzie, "Initial Connection Protocol," Internet Request for Comments 93 (RFC 93), Jan. 1971; http://www.rfc-editor.org/rfc/rfc93.txt.
15. J. Davidson et al., "The Arpanet TELNET Protocol: Its Purpose, Principles, Implementation, and Impact on Host Operating System Design," Proc. ACM/IEEE 5th Data Comm. Symp., 1977, pp. 4:10-18; B.P. Cosell and D.C. Walden, "Development of Telnet's Negotiated Options," IEEE Annals of the History of Computing, vol. 25, no. 2, 2003, pp. 80-82.
16. A. McKenzie (for example, RFC 281 and RFC 454) and N. Neigus (for example, RFC 542) of BBN were particularly active with FTP.
17. S. Levy, "History of Technology Transfer at BBN," IEEE Annals of the History of Computing, vol. 27, no. 2, 2005, pp. 34-35.
18. Inside BBN, Bob Hinden strongly argued that BBN should suggest to DCA to use Ethernet rather than X.25 as the interface standard for Arpanet. Had Hinden's suggestion been followed, Ethernet and Ethernet switching technology might look very different than they do today.
19. Some of the story of the transition from the Arpanet to the Defense Data Network and the Internet is also told by A.A. McKenzie and D.C. Walden, "The Arpanet, The Defense Data Network, and The Internet," Encyclopedia of Telecommunications, vol. 1, Marcel Dekker, 1991, pp. 341-346.
20. Where Wizards Stay Up Late, pp. 176-186.
21. V. Cerf and R. Kahn, "A Protocol for Packet Network Interconnection," IEEE Trans. Comm., vol. COM-22, 1974, pp. 637-648.
22. S.C. Butterfield, R.D. Rettberg, and D.C. Walden, "The Satellite IMP for the ARPA Network," Proc. 7th Hawaii Int'l Conf. System Sciences, IEEE CS Press, 1974, pp. 70-73; see also the "Radio and wireless" section in this article.
23. J. Burchfiel, R.S. Tomlinson, and M. Beeler, "Functions and Structure of a Packet Radio Station," Proc. AFIPS Conf., vol. 44, AFIPS Press, 1975, pp. 245-251.
24. Inventing the Internet, p. 131.
25. V.G. Cerf et al., "Proposal for an Internetwork End-to-End Protocol," ACM SIGCOMM Computer Comm. Rev., vol. 6, no. 1, 1976, pp. 63-89.

26. Exactly how IP was invented is not documented. It apparently happened in a hallway during a Network Working Group meeting at USC ISI sometime in 1977, according to a personal communication from David Reed on 14 July 2004.
27. There was a sense that the work on TCP was an implicit criticism of the existing work done on Arpanet.
28. TENEX was the first major operating system to use paging and supported innovative features such as a clear distinction between operating system and applications, command line completion, and file versioning. TENEX was created and maintained by BBN, and having easy access to the source of a (widely used) operating system was an important resource for BBN's networking research. See Ref. 12.
29. R.S. Tomlinson, "Selecting Sequence Numbers," Proc. ACM SIGCOMM/SIGOPS Interprocess Comm. Workshop, ACM Press, 1975, pp. 11-26. For a good discussion of the sequence number problem as applied to routing protocols, see R. Perlman, Interconnections: Bridges, Routers, Switches, and Internetworking Protocols, 2nd ed., Addison-Wesley, 1999, pp. 310-317.
30. As part of this work, Plummer wrote several influential notes on TCP implementation. However, oddly enough, he is best remembered for his vigorous but unsuccessful attempt to keep "Rubber EOLs" in the TCP specification.
31. This last idea was borrowed from work on Open Systems Interconnection protocol implementation by Tom Blumer.
32. R. Gurwitz and R. Walsh, "Converting the BBN TCP/IP to 4.2BSD," Proc. 1984 Summer Usenix Conf., Usenix Assoc., 1984, pp. 52-61.
33. D. Waitzman, C. Partridge, and S.E. Deering, "Distance Vector Multicast Routing Protocol," RFC 1075, 1 Nov. 1988; http://www.rfc-editor.org/rfc/rfc1075.txt.
34. While this is Charlie Lynn's only mention in the article, his contributions were far bigger. Charlie was one of BBN's best implementers for three decades. He specialized in finding ways to implement complex networked systems and served as a mentor to a number of the people mentioned in this article. He passed away unexpectedly in 2004.
35. D. Deutsch, R. Resnick, and J. Vittal, "Specification of a Draft Message Format Standard," tech. report 4486, BBN, 1980.
36. R.D. Rettberg and D.C. Walden, "Gateway Design for Computer Network Interconnection," Proc. Eurocomp (The European Computing Conf. Comm. Networks) 1975, Online Conferences Ltd., 1975, pp. 113-128.
37. M. Beeler et al., "Gateway Design for Computer Network Interconnection," invited presentation, Proc. AFIPS 1976 Nat'l Computer Conf. and Exposition, AFIPS Press, 1976; V. Strazisar, "Gateway Routing: An Implementation Specification," Internet Eng. Note 30 (IEN 30), Apr. 1978; V. Strazisar, "How to Build a Gateway," IEN 109, Aug. 1979.

38. R. Hinden and A. Sheltzer, "The DARPA Internet Gateway," RFC 823, Sept. 1982; http://www.rfc-editor.org/rfc/rfc823.txt.
39. By 1983, Mike Brescia had written a memo known informally as "Mike's Instructions For Building Your Own Gateway," which described in detail what type of LSI-11 to order, what network cards to order, and how to get a software tape from Mike.
40. BBN's parallel processing computer R&D from the Pluribus through the Butterfly and beyond is sketched by D. Walden, "Basic Computer and Software Engineering, Part 1," A Culture of Innovation: Insider Accounts of Computing and Life at BBN, D. Walden and R. Nickerson, eds., to be published.
41. Described in T. Mallory, "SPF Routing in the Butterfly Gateway," Proc. April 22-24, 1987 Internet Eng. Task Force (Sixth IETF), P. Gross, ed., IETF, 1987; http://www3.ietf.org/proceedings/prior29/IETF06.pdf.
42. R. Rettberg felt that it would be efficient to combine the boot load device and the console into a single platform and proposed using small Macintosh computers for this function. One result was that the Butterfly loaded its boot image over the Mac's very slow serial port.
43. R. Hinden led the team that designed the initial prototype, code-named Emerald.
44. C. Partridge et al., "A Fifty Gigabit Per Second IP Router," IEEE/ACM Trans. Networking, vol. 6, no. 3, 1998, pp. 237-248.
45. Request for Quotation, Dept. of the Army Defense Supply Service-Washington, July 29, 1968; a 55-page document distributed on behalf of the Advanced Research Projects Agency.
46. Interface Message Processors for the ARPA Computer Network, Quarterly Technical Report, BBN, 1969.
47. W.R. Crowther et al., "The Interface Message Processor for the ARPA Computer Network," Proc. AFIPS Conf., vol. 36, AFIPS Press, 1970, pp. 551-567; D. Walden, "The Interface Message Processor, Its Algorithms, and Their Implementation," invited lecture, Journées d'Étude: Réseaux de Calculateurs, Association Française pour la Cybernétique Économique et Technique [Workshop on Computer Networks, French Society of Economic and Technical Cybernetics (AFCET)], Paris, 1972; D. Walden, "Routing (A Memorandum)," Proc. 1974 Int'l Seminar on Performance Evaluation of Data Processing Systems, Weizmann Inst. of Science, pp. 429-433; J.M. McQuillan, Adaptive Routing Algorithms for Distributed Computer Networks, tech. report 2831, BBN, 1974, a reprint of McQuillan's Harvard PhD thesis; J.M. McQuillan and D.C. Walden, "The ARPA Network Design Decisions," Computer Networks, vol. 1, no. 5, 1977, pp. 243-289.


48. It is also known as the distributed Bellman-Ford algorithm, although BBN's parallel algorithm for Arpanet had little similarity to Bellman's or Ford's nonparallel algorithm (D. Walden, "The Bellman-Ford Algorithm and Distributed Bellman-Ford"; http://www.walden-family.com/public/bfhistory.pdf).
49. Many of these features were foreseen by team members W.R. Crowther and R.E. Kahn (see Ref. 7), but the initial algorithms probably were the best the team could devise within the time available for initial network deployment.
50. J.M. McQuillan, Adaptive Routing Algorithms for Distributed Computer Networks, tech. report 2831, BBN, 1974; a reprint of McQuillan's Harvard PhD thesis.
51. R. Perlman, Interconnections.
52. E.C. Rosen, "Exterior Gateway Protocol," RFC 827, Oct. 1982. Much of the credit for EGP is shared with David Mills, then of Linkabit Corp. Mills took Rosen's somewhat sketchy specification of EGP and made it concrete and stable (see D.L. Mills, "Exterior Gateway Protocol Formal Specification," RFC 904, Apr. 1984; http://www.rfc-editor.org/rfc/rfc904.txt).
53. One of the authors, Partridge, had a characteristic encounter with Brescia. Partridge was debugging a new network management protocol implementation and, to see if he could use it to retrieve data from a local router, sent a few packets to the router. On getting no reply, he reread his code for bugs, then tried again. Still no answer from the router, but the phone rang. It was Brescia calling to say that Partridge's implementation had two bytes reversed in the protocol header and to please fix it.
54. D. Comer, "The Computer Science Research Network CSNET: A History and Status Report," Comm. ACM, vol. 26, no. 10, 1983, pp. 747-753.
55. In the 1980s, there were several competing email networks and getting email between networks required considerable skill (see J.S. Quarterman, The Matrix, Digital Press, 1990). Charlotte Mooers, CSnet's postmistress, became internationally known for her skill in getting email to the right place.
56. C. Partridge, "Mail Routing and the Domain Name System," RFC 974, Jan. 1986; http://www.rfc-editor.org/rfc/rfc974.txt.
57. L. Lanzillo and C. Partridge, "Implementation of Dial-up IP for UNIX Systems," Proc. 1989 Winter Usenix Conf., Usenix Assoc., 1989, pp. 201-208.
58. One of Rugo's marketing challenges was finding out who in a high-tech company was running their network. This was before the days of chief technical officers (CTOs), so Rugo took to calling each company's CEO, on the assumption that he'd get their assistant, who could tell him who ran the company's network. Some of his most memorable marketing calls occurred when he actually got put directly through to the CEO of a major high-tech company.

59. NEARnet chose to spend its NSF start-up money on capital equipment, while most regional networks chose instead to use the money to reduce what they charged customers. When the money ran out, many regionals were left with an infrastructure in need of upgrade and customers suffering sticker shock.
60. Probably, in part, because some key operations people were at Planet, including Dan Long, John Curran (ex-CSnet and Planet's CTO), Mike Brescia, and Steve Blumenthal (manager of the group that ran the Wideband network and several other experimental networks, and who succeeded Curran as CTO).
61. D. Greenfield, "North American ISP Survey: Looking for Number Two," Network Magazine, Sept. 2002; http://www.itarchitect.com/article/NMG20020822S0001.
62. BBN's 1996 10-K filing with the Securities and Exchange Commission (SEC) shows that Planet's revenues (closely tied to customers and bandwidth) more than doubled between June 1994 and June 1995, and more than quadrupled between June 1995 and June 1996.
63. A brief comment on the economics of hypergrowth in the 1990s may be useful. Deploying new networking equipment and circuits typically took 6 to 12 months from initial ordering until completed installation, so an ISP had to guess how big it would be a year in advance. The penalty for underestimating was crushing: loss of market share, reduced revenue stream, devalued stock. The penalty for overestimating was mild: the excess capacity would be consumed in the next year, provided hypergrowth continued. BBN's two biggest competitors in this period were UUnet and PSI. PSI was the most aggressive in pursuing a grow-at-all-costs policy, while UUnet (until it was acquired by WorldCom) was the most fiscally conservative of the three competitors.
64. I.M. Jacobs, R. Binder, and E.V. Hoversten, "General Purpose Packet Satellite Networks," Proc. IEEE, vol. 66, no. 11, 1978, pp. 1448-1467.
65. G. Falk et al., "Integration of Voice and Data in the Wideband Packet Satellite Network," IEEE J. Selected Areas in Comm., vol. SAC-1, no. 6, 1983, pp. 1076-1083.
66. W. Edmond et al., "The Butterfly Satellite IMP for the Wideband Packet Satellite Network," Proc. ACM SIGCOMM, ACM Press, 1986, pp. 194-203.
67. This was not solely a BBN effort or problem. Linkabit (the provider of ground modems) and Western Union (which owned the satellite) had their own challenges to overcome to make the Wideband Net a success.


68. C. Topolcic, "Experimental Internet Stream Protocol: Version 2 (ST-II)," RFC 1190, 1 Oct. 1990; ftp://ftp.rfc-editor.org/in-notes/rfc1190.txt.
69. M. Bergamo, ACTS Gigabit Satellite Network Study: Satellite Beam Switched TDMA Networking and Support for SONET Interfaces, tech. report 7574, BBN, 29 Mar. 1991.
70. M. Bergamo, "Network Architecture and SONET Services in the NASA/DARPA Gigabit Satellite Network Using NASA's Advanced Communications Technology Satellite," Proc. 15th AIAA Int'l Comm. Satellite Systems Conf., AIAA, 1994, pp. 208-216.
71. M. Bergamo and D. Hoder, "Gigabit Satellite Network for NASA's Advanced Communications Technology Satellite (ACTS)," Int'l J. Satellite Comm., vol. 14, no. 3, 1996, pp. 161-173.
72. R. Ramanathan and J. Redi, "A Brief Overview of Ad Hoc Networks: Challenges and Directions," IEEE Comm. Magazine, 50th anniversary commemorative issue, May 2002, pp. 20-22.
73. R.E. Kahn et al., "Advances in Packet Radio Technology," Proc. IEEE, vol. 66, no. 11, 1978, pp. 1468-1496; J. Burchfiel, R.S. Tomlinson, and M. Beeler, "Functions and Structure of a Packet Radio Station," AFIPS Conf. Proc., vol. 44, AFIPS Press, 1975, pp. 245-251.
74. Others who participated in the project at BBN included Jil Westcott, Ray Tomlinson, Radia Perlman, Don Allen, Mike Beeler, Virginia (Strazisar) Travers, and Greg Lauer.
75. J. Burchfiel, R.S. Tomlinson, and M. Beeler, "Functions and Structure of a Packet Radio Station," AFIPS Conf. Proc., vol. 44, 1975, pp. 245-251.
76. J. Jubin and J.D. Tornow, "The DARPA Packet Radio Network Protocols," Proc. IEEE, vol. 75, no. 1, 1987, pp. 21-32; other participants playing long-term roles in this program included Jil Westcott and Greg Lauer.
77. NTDR was the first real-life ad hoc network, and was used by the Fourth Infantry Division in the recent Iraq war.
78. S. Ramanathan and M. Steenstrup, "Hierarchically Organized, Multihop Mobile Networks for Multimedia Support," ACM/Baltzer Mobile Networks and Applications, vol. 3, no. 1, 1998, pp. 101-119; K. Kasera and S. Ramanathan, "A Location Management Protocol for Hierarchically Organized Multihop Mobile Networks," Proc. IEEE Int'l Conf. Universal Personal Communications (ICUPC 97), IEEE Press, 1997.
79. R. Ramanathan, "On the Performance of Ad Hoc Networks Using Beamforming Antennas," Proc. ACM MobiHoc 2001, ACM Press, Oct. 2001, pp. 95-105.
80. J. Redi and R. Ramanathan, "Utilizing Directional Antennas for Ad Hoc Networks," Proc. IEEE Milcom, IEEE Press, 2002.
81. J. Redi and B. Welsh, "Energy Conservation for Tactical Mobile Robots," Proc. IEEE Milcom, IEEE Press, 1999; J. Redi et al., "JAVeLEN: An Ultra-Low Energy Ad Hoc Wireless Network," submitted for publication.
82. R. Ramanathan and R. Hain, "Topology Control of Multihop Radio Networks Using Transmit Power Adjustment," Proc. IEEE Infocom, IEEE Press, 2000, pp. 404-413.
83. R. Ramanathan and R. Hain, "An Ad Hoc Wireless Testbed for Scalable, Adaptive QoS Support," Proc. IEEE Wireless Comm. and Networking Conf., vol. 3, IEEE Press, 2000, pp. 998-1002.
84. R. Ramanathan, "Making Ad Hoc Networks Density Adaptive," Proc. IEEE Milcom, IEEE Press, 2001, pp. 957-961.
85. C. Santivanez, R. Ramanathan, and I. Stavrakakis, "Making Link-State Routing Scale for Ad Hoc Networks," Proc. ACM MobiHoc, ACM Press, 2001, pp. 22-32.

Craig Partridge is Chief Scientist for Internetworking at BBN Technologies, where he has worked on data networking problems since 1983. He is best known for his work on email routing, TCP round-trip time estimation, and high-performance router design. He received an MSc and a PhD, both in computer science, from Harvard University. Partridge is the former editor in chief of IEEE Network Magazine and ACM Computer Communication Review and is an IEEE Fellow.

Steven Blumenthal is the CTO at BridgePort Networks, a venture-backed start-up developing systems to provide seamless roaming between wireless LANs and cellular carrier networks. Blumenthal was with BBN from 1977 through 1997, where he worked on and led DARPA projects in packet voice/video conferencing, satellite packet switching, and the engineering and buildout of IP networks for AOL and BBN Planet. Later, at GTE and Genuity, he led the engineering of GTE's nationwide fiber-optic backbone and the development of Internet services. He has a BS and an MS in electrical engineering and computer science from MIT.

Readers may contact Craig Partridge about this article at craig@bbn.com.
