History of the Internet


A Brief History of the Internet (1997)

Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff

Introduction

The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities.

The Internet is at once a worldwide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers regardless of geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research on packet switching, the government, industry, and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" and "http://www.acm.org" trip lightly off the tongue of the random person on the street.1

This is intended to be a brief, necessarily superficial and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet.2 In this article,3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internet users working together to create and evolve the technology.

And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects: technological, organizational, and community. And its influence reaches not only the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

1 Perhaps this is an exaggeration based on the principal author's residence in Silicon Valley.
2 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English-language magazines devoted to the Internet.
3 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, February 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing this and the abbreviated version.

Origins of the Internet

The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA,4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together.

To explore this, in 1965, working with Thomas Merrill, Roberts connected the TX-2 computer in Massachusetts to the Q-32 in California with a low-speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit-switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL, and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.5

In August 1968, after Roberts and the DARPA-funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMPs). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMPs, with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.6

4 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
5 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET; only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.

Due to Kleinrock's early development of packet switching theory and his focus on analysis, design, and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler, which included functions such as maintaining tables of hostname-to-address mapping as well as a directory of the RFCs.

A month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and the University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for displaying mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the fledgling Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network.

This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete host-to-host protocol and other network software. In December 1970 the Network Working Group (NWG), working under S. Crocker, finished the initial ARPANET host-to-host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March, Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

6 Including, among others, Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker, who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and also was to play a long-term role in the ICCB and IAB.

The Initial Internetting Concepts

The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks, and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture".

Until that time there was only one general method for federating networks. This was the traditional circuit switching method, where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers.
Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

The idea of open architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-to-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET, and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on the ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts.

Thus Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

Four ground rules were critical to Kahn's early thinking:

• Each distinct network would have to stand on its own, and no internal changes could be required of any such network to connect it to the Internet.
• Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
• Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
• There would be no global control at the operations level.

Other key issues that needed to be addressed were:

• Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
• Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
• Gateway functions to allow them to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
• The need for end-to-end checksums, reassembly of packets from fragments, and detection of duplicates, if any.
• The need for global addressing.
• Techniques for host-to-host flow control.
• Interfacing with the various operating systems.

There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.

The give and take was highly productive and the first written version7 of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG), which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex conference.

Some basic approaches emerged from this collaboration between Kahn and Cerf:

• Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
• Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge, and each ack returned would be cumulative for all packets received to that point.
• The exact way in which the source and destination would agree on the parameters of the windowing to be used was left open. Defaults were used initially.
• Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national-level networks like the ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted, or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols: the simple IP, which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.

A major initial motivation for both the ARPANET and the Internet was resource sharing, for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

There were other applications proposed in the early days of the Internet, including packet-based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general-purpose nature of the service provided by TCP and IP that makes this possible.
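The split described above (IP for addressing and forwarding, TCP for a reliable byte stream, UDP for raw datagrams) is still visible in today's socket interfaces. The following minimal sketch, using Python's standard socket module purely as a modern illustration (not period code), sends a single UDP datagram over loopback; the commented lines show the corresponding TCP stream-style calls.

```python
import socket

# UDP: the datagram alternative added for applications that do not want
# TCP's reliability machinery. Each sendto() is one independent packet;
# the protocol itself guarantees neither delivery nor ordering.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # bind to any free loopback port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one datagram", addr)

data, peer = receiver.recvfrom(1024)
print(data)                            # b'one datagram'

# TCP, by contrast, is connection-oriented and delivers a reliable,
# ordered byte stream (the "very long stream of octets" model above):
#   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#   s.connect((host, port))
#   s.sendall(b"...")
```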
7 This was later published as V.G. Cerf and R.E. Kahn, "A Protocol for Packet Network Intercommunication", IEEE Trans. Comm. Tech., Vol. COM-22, No. 5, pp. 627-641, May 1974.

Proving the Ideas

DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson), and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification, and within about a year there were three independent implementations of TCP that could interoperate. This was the beginning of long-term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment grew to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.

The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible.

They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time sharing systems, could be a part of the Internet.

In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.

Widespread development of LANs, PCs, and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet, and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).

A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.

The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness, and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
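To make the classful addressing and DNS mechanisms described above concrete, here is a minimal Python sketch (the helper name classful_parts is our own illustrative choice, and the DNS lookup requires network access): it resolves a hierarchical host name to an address via the standard resolver, then splits that 32-bit address along the original class A/B/C boundaries, which CIDR has since made obsolete.

```python
import socket
import struct

def classful_parts(addr: str):
    """Split a dotted-quad IPv4 address along the original A/B/C class rules."""
    (n,) = struct.unpack("!I", socket.inet_aton(addr))  # address as a 32-bit int
    if n >> 31 == 0b0:       # leading 0   -> class A: 8-bit network, 24-bit host
        return "A", n >> 24, n & 0xFFFFFF
    if n >> 30 == 0b10:      # leading 10  -> class B: 16-bit network, 16-bit host
        return "B", n >> 16, n & 0xFFFF
    if n >> 29 == 0b110:     # leading 110 -> class C: 24-bit network, 8-bit host
        return "C", n >> 8, n & 0xFF
    return "D/E", n, 0       # multicast/reserved space, outside A/B/C

print(classful_parts("18.7.22.69"))     # class A: network 18, 24-bit host field

# DNS: resolve a hierarchical host name into an Internet address,
# then interpret that address under the old classful rules.
address = socket.gethostbyname("www.acm.org")
print(address, classful_parts(address))
```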
As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to fit more efficiently into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in the dispersal of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.

One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a "flag-day" style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying "I survived the TCP/IP transition").

TCP/IP had been adopted as a defense standard three years earlier, in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, the ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of the ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.

Thus, by 1985, the Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but the interconnection between different mail systems was demonstrating the utility of broad-based electronic communications between people.

Transition to Widespread Infrastructure

At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking, especially electronic mail, demonstrated by DARPA and Department of Defense contractors on the ARPANET, was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The US Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE's High Energy Physicists responded by building HEPNet. NASA space physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) computer science community with an initial grant from the US National Science Foundation (NSF). AT&T's free dissemination of the UNIX computer operating system spawned USENET, based on UNIX's built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an "email as card images" paradigm.

With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built, i.e., they were intended for, and largely restricted to, closed communities of academics; hence there was little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet from Digital, and SNA from IBM.8 It remained for the British JANET (1984) and US NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a US university to receive NSF funding for an Internet connection was that "... the connection must be made available to ALL qualified users on campus."

In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision: that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide-area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.

NSF also elected to support DARPA's existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB's Internet Engineering and Architecture Task Forces and by NSF's Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA's and NSF's pieces of the Internet.

In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.
• Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported "managed interconnection points" for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and "*IX" facilities that are prominent features of today's Internet architecture.
• To coordinate this sharing, the Federal Networking Council9 was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.
• This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA's Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.

8 The desirability of email interchange, however, led to one of the first "Internet books": !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.
9 Originally named Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate US research network activities in support of the international coordination provided by the CCIRN.

• Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.
• On the NSFNET Backbone (the national-scale segment of the NSFNET) NSF enforced an "Acceptable Use Policy" (AUP) which prohibited Backbone usage for purposes "not in support of Research and Education." The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of "private", competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet", and on the "com-priv" list on the net itself.

• In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF entitled "Towards a National Research Network". This report was influential on then-Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.
• In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled "Realizing The Information Future: The Internet and Beyond", was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated, and which has had a lasting effect on the way its evolution is thought about. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture, and regulation for the Internet.
• NSF's privatization policy culminated in April 1995 with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.

The Backbone had made the transition from a network built from routers out of the research community (the "Fuzzball" routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.

Such was the weight of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995), and the quality of the protocols themselves, that by 1990, when the ARPANET itself was finally decommissioned,10 TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

10 The decommissioning of the ARPANET was commemorated on its 20th anniversary by a UCLA symposium in 1989.

The Role of Documentation

A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols. The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal fast distribution way to share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death, October 16, 1998.

The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams. Over time, the RFCs have become more focused on protocol standards (the “official” specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the “documents of record” in the Internet engineering and standards community. The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems. Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering.

The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed: RFCs were presented by joint authors with a common view, independent of their locations. Specialized email mailing lists have long been used in the development of protocol specifications, and continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.

As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.

Formation of the Broad Community

The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio, and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into the Internet Working Group.

In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies: an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research; an Internet Research Group, which was an inclusive group providing an environment for general exchange of information; and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.

In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g., routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces. It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair. After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.

This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.

The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.

The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980s and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This, coupled with a recognized need for community support of the Internet, eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn's Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.

In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more "peer" relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.

The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT's Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.

Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.

Commercialization of the Technology

Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking.
Unfortunately, they lacked both real information about how the technology was supposed to work and how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.

In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two-way discussion was formed that has lasted for over a decade.

After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products, even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.

In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.

Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would allow the elements of the network, such as the routers, to be remotely managed in a uniform way.
Several protocols for this purpose were proposed, including Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.

In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

History of the Future

On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities.

RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that: (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment. One should not conclude that the Internet has now finished changing.
The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams.

The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones) is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications: Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, e.g. broadband residential access and satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders, stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

References

P. Baran, "On Distributed Communications Networks", IEEE Trans. Comm. Systems, March 1964.
V.G. Cerf and R.E. Kahn, "A Protocol for Packet Network Intercommunication", IEEE Trans. Comm. Tech., Vol. COM-22, No. 5, pp. 627-641, May 1974.
S. Crocker, RFC 001, Host Software, April 7, 1969.
R. Kahn, "Communications Principles for Operating Systems", internal BBN memorandum, January 1972.
[REK78] Proceedings of the IEEE, Special Issue on Packet Communication Networks, Volume 66, No. 11, November 1978 (guest editor: Robert Kahn; associate guest editors: Keith Uncapher and Harry van Trees).
L. Kleinrock, "Information Flow in Large Communication Nets", RLE Quarterly Progress Report, July 1961.
L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964.
L. Kleinrock, Queueing Systems: Vol. II, Computer Applications, John Wiley and Sons (New York), 1976.
J.C.R. Licklider and W. Clark, "On-Line Man Computer Communication", August 1962.
L. Roberts and T. Merrill, "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conf., October 1966.
L. Roberts, "Multiple Computer Networks and Intercomputer Communication", ACM Gatlinburg Conf., October 1967.

Authors

Barry M. Leiner was Director of the Research Institute for Advanced Computer Science. He passed away in April 2003.
Vinton G. Cerf is Vice President and Chief Internet Evangelist at Google.
David D. Clark is Senior Research Scientist at the MIT Laboratory for Computer Science.
Robert E. Kahn is President of the Corporation for National Research Initiatives.
Leonard Kleinrock is Professor of Computer Science at the University of California, Los Angeles, and is Chairman and Founder of Nomadix.
Daniel C. Lynch is a founder of CyberCash Inc. and of the Interop networking trade show and conferences.
Jon Postel served as Director of the Computer Networks Division of the Information Sciences Institute of the University of Southern California until his untimely death on October 16, 1998.
Dr. Lawrence G. Roberts is CEO, President, and Chairman of Anagran, Inc.
Stephen Wolff is CTO of Internet2.
