Wednesday 22 November 2006

History and Evolution of the Internet Part III

This ensured that individual networks would retain their specific character while having access to a larger community of computers. The second feature was the establishment of a ‘gateway’ within each network. This would be its means of linking up to the larger network outside of itself. Basically, this ‘gateway’ would take the form of a larger computer capable of handling large volumes of traffic, running software that would enable it to redirect and transmit data ‘packets’. Another peculiarity of this ‘gateway’ was that it would retain no memory of the data being transferred and transmitted through it. While this was primarily designed to cut down on the workload of the computer, it had the added advantage of deterring any sort of censorship or control of the traffic.
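To make the idea concrete in present-day terms, here is a minimal Python sketch of such a ‘memoryless’ gateway. It is purely illustrative: the names (Packet, Gateway, routes) and the routing table are invented for this example and do not correspond to any historical gateway software.

```python
# Illustrative sketch only: a "memoryless" gateway that forwards packets
# between networks without logging or inspecting their contents.
# All names here (Packet, Gateway, routes) are hypothetical.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str        # originating host, e.g. "net-a/host-1"
    destination: str   # target network, e.g. "net-b"
    payload: bytes     # opaque data; the gateway never looks inside

class Gateway:
    """Links one network to the outside world; keeps no record of traffic."""

    def __init__(self, routes):
        # routes maps a destination network to the outgoing link to use
        self.routes = routes

    def forward(self, packet: Packet) -> str:
        # Pick the outgoing link and hand the packet on.
        # Nothing about the packet is stored once it has been passed along,
        # so there is no stored traffic to censor or reconstruct later.
        return self.routes[packet.destination]

# Example: a gateway for "net-a" that knows how to reach two other networks.
gw = Gateway(routes={"net-b": "link-to-net-b", "net-c": "link-to-net-c"})
print(gw.forward(Packet("net-a/host-1", "net-b", b"hello")))  # -> link-to-net-b
```

The point of the sketch is the absence of any log or stored state: once a packet has been handed to the next link, the gateway has nothing left to inspect.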

Data packets were to be transferred along the fastest accessible path. For example, if one of the computers in the network was slow or blocked, packets would be rerouted through other computers until they reached their final destination. Any gateway linking different sorts of networks together would therefore have to remain open at all times, and it could not discriminate between the different streams of traffic being routed through it. The implicit principle of this sort of ‘open architecture’ system, of course, is that the underlying operating principles of the network are accessible to all the networks participating in it, which immediately democratizes the whole organization. Since the basic information needed to design such an interconnected network was available to any individual or organization, theoretically any new network could easily be linked up to the existing one. This feature would later enable a range of technological innovations on the internet.
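The rerouting idea can also be sketched in a few lines of Python. The topology, node names and the set of ‘down’ computers below are hypothetical; real routing is far more elaborate, but the principle, of simply finding another open path when one machine is blocked, is the same.

```python
from collections import deque

def find_path(graph, start, goal, down):
    """Breadth-first search that skips unavailable nodes,
    so a packet is rerouted around slow or blocked computers."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour in down or neighbour in seen:
                continue
            seen.add(neighbour)
            queue.append(path + [neighbour])
    return None  # no open route at all

# Hypothetical topology: A can reach D directly via B, or via C and E.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "E": ["D"],
}

print(find_path(graph, "A", "D", down=set()))   # ['A', 'B', 'D']
print(find_path(graph, "A", "D", down={"B"}))   # rerouted: ['A', 'C', 'E', 'D']
```

When computer B is marked as down, the packet still arrives at D, just along a longer open path.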

We need to keep in mind that at this point in the history of the internet, we are basically talking about huge mainframe computers only. These machines were not accessible to the public and were largely owned by huge corporations, universities or government organizations. We are still far away from the user-friendly World Wide Web of today. Initially it was thought that this kind of system would ultimately depend on only a select few national networks or sub-networks.

By this time, a number of independent computer networks had come into being. One of the more important of these was Telenet, developed by Stanford in 1974. This was also the first communication network available to the public at large, and it functioned somewhat like a commercial version of the ARPANET. Markedly different in character was the network developed by the US Department of Energy. It was called MFENet and was meant to facilitate research into Magnetic Fusion Energy. This spurred NASA to develop its own SPAN for the use of space physicists. 1976 saw networking expand to reach the larger academic community, with the development of a UNIX-to-UNIX copy protocol (UUCP) by AT&T Bell Laboratories. This reach was possible because AT&T provided free access to the software to all UNIX users, and UNIX was the main operating system employed by academia at the time.

Further developments were the establishment of the still-operational Usenet in 1979 and of Bitnet (by the City University of New York) in 1981. The US National Science Foundation funded the development of CSNet to enhance communication between computer scientists situated in disparate locations across government, industry and the universities. So far we have talked only about the developments taking place in integrated computer networking in the United States. This does not mean that similar experimentation was not being carried out beyond the boundaries of the US. In 1982, Eunet was launched. This was primarily a European adaptation of the American UNIX network, and it linked together networks based in the UK, the Netherlands and Scandinavia. EARN (European Academic and Research Network) was established in 1984 and was modeled on Bitnet.