
Symbiotic Computing


The history of computers is a well-documented progression from enormous, expensive mainframes built for complex scientific and industrial use to tiny, cheap devices that sit in our pockets and homes and affect nearly every facet of our daily lives. However, the story of how computing evolved from the niche interest of scientists and mathematicians into ubiquitous, user-friendly machines requires a much deeper analysis of the events and influences that brought computers into symbiosis with humanity. The transition from room-sized mainframes programmed with punch cards to phones and tablets so intuitive that toddlers can use them was neither fast nor easy, but a long process of triumphs and failures shaped by the many brilliant people across the country who came together to create something truly amazing. This underground technological revolution has forever changed the way we live, work, and communicate, and computing will continue to grow more symbiotic with humanity as new technologies and ideas are developed. The purpose of this wiki is to demonstrate and analyze the processes, influences, and engines that drove these ideas into the modern age.

 

 

Project MAC


In the days when every computing resource was prohibitively expensive, batch processing was the dominant form of computing in businesses and universities everywhere. Yet as technology advanced and the limitations of batch processing became more and more apparent, people looked for better ways to use computing. One solution was the then-radical idea of time-sharing, in which multiple users could simultaneously use the same mainframe computer. While the idea could not have been fully realized on the machines of the day, that did not stop one MIT lab from trying.
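At its core, time-sharing just means slicing processor time among users quickly enough that each feels like the machine is theirs. The sketch below is a minimal, hypothetical round-robin scheduler in Python (all names invented for illustration; no relation to actual CTSS or Multics code):

```python
# Minimal round-robin time-sharing sketch: one processor, many users,
# each granted a short quantum of work in turn. Purely illustrative.
from collections import deque

def time_share(jobs, quantum=1):
    """jobs: dict mapping user -> units of work remaining."""
    queue = deque(jobs.items())
    while queue:
        user, remaining = queue.popleft()
        done = min(quantum, remaining)
        print(f"{user} runs for {done} time unit(s)")
        if remaining > done:
            queue.append((user, remaining - done))  # rejoin the back of the line

time_share({"alice": 3, "bob": 1, "carol": 2})
```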

A view inside the Project MAC Laboratory

Formation

J.C.R. Licklider, regarded by many as the visionary of interactive computing, served as the director of ARPA's Information Processing Techniques Office (IPTO) from 1962 to 1964. During this time, he was looking for a way to put time-sharing into practice. He found a solution through his former colleague Robert Fano, a professor at MIT. Fano, along with Fernando J. Corbató, Robert Metcalfe, and others, proposed Project MAC (signifying both Multiple Access Computer and Machine Aided Cognition) to Licklider: a laboratory focused on developing time-sharing and on using it to research other applications of time-shared computers. Licklider awarded the group a $2 million grant from ARPA, with Fano serving as the director.

Research

Multics

Fano using a CTSS Terminal

Over the next few years, Project MAC would become famous for its research on operating systems, artificial intelligence, and the theory of computation. Some of its most important developments were in time-sharing, one of the biggest reasons the lab was created. One of the first time-sharing systems, the Compatible Time-Sharing System (CTSS), was developed by Corbató and demonstrated at MIT before the formation of Project MAC; it was used at the lab throughout its operation and greatly influenced the design of subsequent time-sharing systems. Most notable was Multics, the successor to CTSS. Developed at Project MAC, Multics would go on to be the basis of almost every other multiple-access system created. Multics also inspired the development of Unix in 1969, a fundamental foundation of systems that continue to be used to this day.

Artificial Intelligence

An "AI Group" led by Marvin Minsky was a group of programmers and computer scientists integrated into Project MAC. They were interested in the problems of vision, mechanical motion and manipulation, and language, which they viewed as the keys to more intelligent machines. In addition, they had their own mainframe computer (a PDP-6, followed by a PDP-10) for which they developed their own time-sharing operating system, known as the Incompatible Time-Sharing System, or ITS, as a joke on the name of the CTSS. The AI group disagreed with the direction taken with Multics (particularly with the dicision to include powerful system security), so they developed ITS instead.

An IBM 7094, an example of one of the machines used at Project MAC

Separation of LCS and AI Lab

By the late 1960s, Minsky's AI group wanted more space to conduct their own research and were unable to get satisfaction from the then project director, Licklider (who had left ARPA for a brief stint at IBM before returning to MIT to become the Project MAC director). This, along with no small amount of university politics, led to a separate MIT AI Lab being formed in 1970. Minsky, along with many of his AI group colleagues, left Project MAC to join the new lab. Talented programmers such as Richard Stallman (who would go on to write EMACS and launch the GNU Project) flourished at the AI Lab during this time.

Most of the researchers who did not join the AI Lab left Project MAC to form the Laboratory for Computer Science, where they continued their research on operating systems, programming languages, distributed systems, and the theory of computation, and would continue to do so for the next thirty years.

A PDP-6, one of the computers used by the AI Lab

 

ARPANET


Origins of the Internet

Early Networking
The desire for protection against the threat of a nuclear first strike during the Cold War prompted the Air Force to build a communications system that would be able to survive attacks and maintain "proper command and control." The responsibility for realizing this system fell to the Air Force's "think tank," the RAND Corporation. Founded in 1946, the RAND Corporation was "an outgrowth of operations research efforts initiated during World War II." It attracted many talented minds and was active in the field of computer science research.
Paul Baran was a young engineer who joined the RAND Corporation in 1959 and pursued a survivable communications system even without an explicit contract from the Air Force. One of his first attempts at such a system was the "Go / No Go" system, intended to provide control through redundant AM radio communication lines, so "...the loss of any single point in the network would not result in a critical failure." When the Air Force was presented with this idea, they insisted they needed greater communications capacity.
Over the next three years Baran formulated a new communications system "...that would combine survivability with high capacity." He planned to achieve this with a distributed system in which every node was capable of switching traffic, rather than a hierarchical, concentrated switching system, and with as many as eight lines between each node. Baran also incorporated cryptography and a message priority system.
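Baran's survivability claim is easy to demonstrate in miniature. The sketch below (plain Python over an invented toy topology; nothing here comes from RAND's actual work) uses a breadth-first search to show that a mesh with redundant lines can still route traffic after a node is destroyed:

```python
# Survivable routing sketch: find a path through a distributed mesh,
# simply ignoring nodes that have been knocked out. Toy example only.
from collections import deque

def find_path(links, src, dst, dead=frozenset()):
    """Breadth-first search from src to dst, skipping destroyed nodes."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # the network has been partitioned

# Every node has multiple lines, so losing B still leaves a route.
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_path(mesh, "A", "D"))              # ['A', 'B', 'D']
print(find_path(mesh, "A", "D", dead={"B"}))  # ['A', 'C', 'D']
```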


Paul Baran
Data moved through this network in a manner called "message switching" or "store-and-forward switching." This meant entire messages, with destinations and sources, were moved through the network from one node to the next (each node receiving, storing, and then forwarding the message) until the message reached its destination. Traffic could be mediated by storing messages until a line was less busy, increasing the system's efficiency and reducing the possibility of data loss.
Various message switching systems were already in use by the DoD, but because of low transmission speeds, the computers at switching nodes tended to be large and complex in order to store all the messages stockpiled while waiting to transmit. Baran believed that a faster transmission system would allow for cheaper computers, and therefore more nodes.
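The mechanism can be sketched in a few lines of Python. This is a hedged illustration with invented names, not a model of any real DoD system: each node stores a complete message and forwards it whole when the outgoing line is free.

```python
# Store-and-forward message switching sketch: every node receives,
# stores, and then forwards ENTIRE messages toward their destination.
class Node:
    def __init__(self, name):
        self.name = name
        self.stored = []  # complete messages waiting for a free line

    def receive(self, message):
        self.stored.append(message)  # store the whole message

    def forward(self, next_node):
        while self.stored:  # forward once the outgoing line is available
            message = self.stored.pop(0)
            print(f"{self.name} -> {next_node.name}: {message['body']}")
            next_node.receive(message)

a, b, c = Node("A"), Node("B"), Node("C")
a.receive({"src": "A", "dst": "C", "body": "an entire message"})
a.forward(b)  # each hop stores, then passes on, the full message
b.forward(c)
```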
In contrast to the distributed system Baran had proposed, AT&T developed a decentralized communications system called AUTOVON, designed in 1961 as a simpler means of providing disaster survivability by decentralizing switching stations. Baran's proposal differed in that it was distributed rather than merely decentralized: the AUTOVON system still had individual nodes responsible for serving hundreds of lines, and its traffic was re-routed manually by operators sitting in the switching stations.

Packet Switching
The concept of switching small blocks of data was first explored by Paul Baran in the early 1960s. Independently, Donald Davies at the National Physical Laboratory in the UK developed the same ideas.
Baran developed the concept of message block switching during his research at the RAND Corporation for the US Air Force into survivable communications networks, first presented to the Air Force in the summer of 1961 as briefing B-265 and then published as RAND Paper P-2626 in 1962. Baran's P-2626 paper described a general architecture for a large-scale, distributed, survivable communications network. The paper focuses on three key ideas: first, the use of a decentralized network with multiple paths between any two points; second, the division of complete user messages into what he called message blocks (later renamed packets); and third, the delivery of these messages by store-and-forward switching.
Adoption of packet switching was slow, however, because packet switching carries overhead, and because the de facto standard means of communication until that point had been circuit based. In packet switching, the computers at either end have to split data into packets and then rebuild the data from packets. Compounding the overhead, each block must carry its own destination, instead of one destination per entire message. And because individual packets could take different routes through the system, they might arrive out of order, requiring more complex algorithms in the reassembly process. The packet switching debate would continue into the 1960s.
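That splitting-and-reassembly overhead is concrete enough to sketch. In the hypothetical Python below (invented field names, not any real wire format), the sender stamps every block with a destination and a sequence number, and the receiver must sort the blocks back into order because the network may deliver them out of sequence:

```python
# Packet switching sketch: split a message into addressed, numbered
# blocks, then reassemble them even if they arrive out of order.
def to_packets(message, dst, size=4):
    return [{"dst": dst, "seq": i // size, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # Sort by sequence number first: arrival order is not guaranteed.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = to_packets("HELLO ARPANET", dst="UCLA")
pkts.reverse()           # simulate out-of-order arrival
print(reassemble(pkts))  # HELLO ARPANET
```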

ARPANET
The Advanced Research Projects Agency would be the first to develop a large-scale packet switching network. The ARPANET project was headed by Lawrence Roberts, a former computer scientist at MIT who had been working on networking. After meeting with Roger Scantlebury at a computing symposium in Tennessee in 1967, Roberts was convinced to increase the bandwidth of the packet switching system from 9.6 to 56 kilobits per second. The additional bandwidth allowed for large amounts of traffic and reduced the need for complex, expensive hardware at each node, since packets no longer had to be stored as long before forwarding. Through various encounters the ARPANET group was exposed to the ideas and techniques of Baran and Davies, and it became convinced that packet switching was the way to go for ARPA.
In 1975 the ARPANET was declared operational, and control of it was taken over by the Defense Communications Agency, freeing ARPA funds to pursue other research projects. The network grew to over 200 hosts by 1981, with a new host being added every 12 days. As the network grew and became increasingly public (as well as connected to machines outside the United States), the US military decided to split the network, separating military machines from the research machines and placing them in their own MILNET. The MILNET could still communicate with the ARPANET via email routed through gateways.


Log of first message sent via ARPANET - 10/29/1969
These early steps in networking laid the foundation for what would eventually become the Internet. The concepts of distributed networking and packet switching as a means of large-scale communication would become integral parts of the ARPANET and later the Internet. The Internet has its roots in the foundations laid down here, but it grew to become something much larger and more complex than any of the early computer scientists who contributed to it could have predicted.

Creators of the Internet


The people who molded and created the framework for what we now call the Internet were Joseph C.R. Licklider, Paul Baran, Donald Davies, Louis Pouzin, Robert Taylor, and Lawrence Roberts. Each one contributed important concepts that were necessary for nodes to connect, chief among them packet switching, the IMP, and the TCP and TCP/IP protocols.

Before personal computers, time-sharing was the main method of computing, and it shared many concepts with networking: a mainframe computer played the role of today's server, multiple users connected via remote terminals the way we now connect via web browsers, and each made use of a slice of the processor much as we each use a share of bandwidth. Originally, only very few people were allowed time on the mainframe, and you had to have a very good reason to use the system. Joseph C. R. Licklider was the first director of the Information Processing Techniques Office at DARPA and a major proponent both of the seamless integration of computing into everyday life and of the idea that everyone should be able to make use of computers. Licklider put funding toward this directive and inspired others to continue his work.

Paul Baran was a researcher at RAND who was interested in US military communications in the event of a nuclear strike. At that time, the longest-range form of communication was the long radio wave. A nuclear strike would have rendered this form of communication useless: to traverse the earth, radio waves must bounce off its surface as well as the ionosphere, and a nuclear strike would have temporarily altered the ionosphere in a way that would adversely affect radio transmissions. To overcome this limitation, Baran foresaw a distributed network in which each node would be capable of forwarding information to the next node in a chain toward its destination, as well as dynamically routing around nodes that became inoperable over the course of use. Such a network could make use of directed signals as opposed to radio waves, because the concept of a distributed network negated the short-range limitations of the medium. Baran even went so far as to envision a form of packet switching (he called them message blocks) that is one of the basic elements of today's Internet. Donald Davies at the NPL (National Physical Laboratory in England) also came up with packet switching, and he is generally credited with the invention even though the two arrived at the same concept at nearly the same time. NPL implemented a working network based on Davies's idea, an early proof of concept.

Donald Davies at a demonstration at The Royal Society in 1949, center

France also had its own research center developing a computer network, run by Louis Pouzin. His efforts produced the CYCLADES network, which brought a number of new innovations to the networking concept that would later be adopted by the Internet. First, the CYCLADES network was divided into layers, like the stack of headers that surrounds data sent over a network today. Pouzin was able to simplify the design of his network layers by removing the requirement of reliable transmission between nodes. Instead of requiring the network to be reliable, he shifted that responsibility to the end nodes themselves: if nodes wanted reliable transmission, they were required to detect missing packets and request a retransmission. These forms of transmission without built-in network reliability are the early forms of the UDP and TCP network protocols. CYCLADES also emphasized the ability of networks to communicate with other networks, each acting as a subnet of the whole, so that CYCLADES could be expanded beyond its then small number of nodes. Finally, CYCLADES addressed the expense and invasiveness of store-and-forward switching at intermediate nodes by providing a direct hardware implementation of packet relay, which cut down the time data spent in transit at an intermediate node and removed the need for that node's computer to delve into the contents of a packet simply to forward it. These same ideas are found in the design of the Internet, which allows for dynamic expansion and for subnets that are owned, operated, and controlled by private individuals, while maintaining relatively fast speeds and privacy. In fact, it was from this inter-network communication that the term Internet first arose.
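Pouzin's end-to-end reliability idea fits in a short sketch. In the hypothetical Python below (invented names, not CYCLADES code), the network silently drops a packet, and it is the receiving host, not the network, that detects the hole by sequence number and asks the sender for a retransmission:

```python
# End-to-end reliability sketch: the network is allowed to lose packets;
# the receiving host finds the gaps and requests retransmission.
def missing_seqs(received, total):
    """Receiver-side hole detection over sequence numbers 0..total-1."""
    return sorted(set(range(total)) - {p["seq"] for p in received})

sent = [{"seq": i, "data": f"block-{i}"} for i in range(5)]
arrived = [p for p in sent if p["seq"] != 2]  # the network drops block 2

holes = missing_seqs(arrived, total=5)
print("receiver requests retransmission of:", holes)  # [2]
arrived += [p for p in sent if p["seq"] in holes]     # sender resends
assert missing_seqs(arrived, total=5) == []           # message complete
```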

The International Organization for Standardization published its own model of a layered network, similar to CYCLADES but created for more general use. From this effort, the Open Systems Interconnection model was developed. The model was extremely comprehensive, separating the transmission of data into seven layers: the Physical Layer, the Data Link Layer, the Network Layer, the Transport Layer, the Session Layer, the Presentation Layer, and the Application Layer. The OSI Model is the precursor to the TCP/IP Protocol stack.
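The layering idea itself can be shown with a toy encapsulation sketch (purely illustrative Python; real protocols use binary headers, not strings): each layer wraps the data handed down from the layer above in its own header, and the receiving side peels the headers off in the opposite order.

```python
# Toy OSI-style encapsulation: wrap data in one header per layer on the
# way down the stack, then strip them off in reverse on the way up.
LAYERS = ["Physical", "Data Link", "Network", "Transport",
          "Session", "Presentation", "Application"]  # bottom to top

def encapsulate(data):
    for layer in reversed(LAYERS):  # Application wraps first, Physical last
        data = f"[{layer}]{data}"
    return data

def decapsulate(frame):
    for layer in LAYERS:            # strip outermost (Physical) header first
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

wire = encapsulate("user data")
print(wire)               # [Physical][Data Link]...[Application]user data
print(decapsulate(wire))  # user data
```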

The Defense Advanced Research Projects Agency had in the meantime been attempting to develop a network of its own, and in the process developed its own set of goals and innovations. The researchers at DARPA would go on to adopt the ideas and advancements of the other prototypical networks worldwide and create the precursor that would grow into the Internet itself: the ARPANET.

The two most influential men at DARPA were Robert Taylor and Lawrence Roberts. Taylor wanted the various research groups around the world to work together so that information would be shared, redundancy eliminated, and costs reduced. Roberts was responsible for a number of the design decisions that would lead to the robust qualities of the ARPANET. He made the decision to use packet switching as opposed to circuit switching in the ARPANET, despite opposition: the only other well-proven US information network of its kind was the telephone system, which used circuit switching, and despite the working packet-switched network at the NPL in England and the CYCLADES network in France, packet switching still posed a real risk to the project because it was new technology. In addition, Roberts planned for the ARPANET to be a distributed network, mirroring the ideas of Paul Baran and Louis Pouzin. The goals were to reduce transmission costs, increase reliability, and create an extendable network.

The BBN IMP team

To protect its rather expensive computers from the hardware alterations that joining the new network would otherwise require, DARPA invented a computer to serve as the interface between an end node and the network at large. This became known as an Interface Message Processor, or IMP. The IMP was dedicated to handling all inbound and outbound network traffic for its node and acted as the mainframe's interface to the network, similar to a modern router. Also developed at DARPA were the various networking protocols that would be used to control data flow across the networks. This began with the Network Control Protocol, or NCP, which offered the ability to move packets to a defined destination. It would later be replaced by the more useful Transmission Control Protocol, or TCP. TCP integrated the ability to send and receive acknowledgements for packets. In this way, the end nodes, rather than the network, could guarantee the success of a packet transfer, paralleling the line of thinking that created CYCLADES. TCP would again evolve into an even more comprehensive protocol after DARPA adopted the Open Systems Interconnection reference model into its protocol. The result was the Internet Protocol Suite, or TCP/IP.
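The acknowledgement mechanism can be sketched as a simple stop-and-wait loop. This is an illustrative toy (invented names, a simulated lossy network, and none of the real TCP machinery such as windows or sequence-number negotiation): the sending node holds each packet until it is acknowledged, retransmitting after a timeout, so delivery is guaranteed by the end nodes rather than by the network.

```python
# Stop-and-wait acknowledgement sketch: retransmit each packet until
# the receiver's ACK comes back (or we give up). Purely illustrative.
import random

def transmit(pkt):
    """Stand-in for the lossy network: 'loses' packets 30% of the time."""
    return random.random() > 0.3  # True means an ACK came back

def send_reliably(packets, max_tries=5):
    for pkt in packets:
        for _ in range(max_tries):
            if transmit(pkt):      # acknowledged: move to the next packet
                break
            print(f"seq {pkt['seq']}: timeout, retransmitting")
        else:
            raise RuntimeError(f"seq {pkt['seq']} was never acknowledged")

random.seed(7)
send_reliably([{"seq": i} for i in range(3)])
```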

DARPA and the creators of the Internet had finally accomplished what they set out to do, and even more. Without these men and DARPA, it is hard to imagine how much longer it would have taken for us to get to the point we are at today.

Molders of the Internet

The molders of the Internet were, perhaps surprisingly, the users. Users, most of whom were computer scientists, often turned into developers, and this was generally encouraged. The beginnings of ARPANET were not an easy time: the network was still largely unsupported, and some sites couldn't quite figure out what the network was for. A multitude of hurdles stood in the way of users joining ARPANET. For starters, having an ARPANET node installed at your location cost anywhere from $55,000 to $107,000. Even then, all the support provided came in the form of a manual describing how to configure the network, a task that could keep an expert computer scientist occupied for a year. For the non-tech-savvy, the network was nearly impossible to use. In response, hosts who needed users attracted them by providing support and easier-to-understand tutorials.

Many of the users also envisioned that this model of networking would allow all of the world's computing to be handled by mainframes that people connected to remotely. However, due to reliability and raw speed issues, many users instead resorted to simply copying programs from host computers and running the software on their own systems. The network did provide unusual solutions to certain problems, though, such as transferring data from computer to computer within a local area network. This was first heavily utilized at MIT, where staff and students would use the local network to transfer files instead of walking from computer to computer with their magnetic tape.

ARPANET users banded together, forming groups such as USING to demand more functionality and better support from ARPA. This was successful for a period of time, until ARPA's moderators felt that they were no longer in control of their network. Not wanting to be seen as a group merely bending to the will of its users, ARPA locked USING out of demanding more user functionality and hampered the efforts of individual users developing their own products to use on the network. BBN was extremely proactive in matters of network maintenance and general improvements upon existing functionality, but it was more resistant to demands for new functionality, especially ones that would add to its already large management task list.

The major uses of ARPANET before the introduction of e-mail were climatology and seismology. An interesting note about the seismology work is that one of its uses was monitoring both Russian and American lands for signs of prohibited nuclear testing; this was also an instance of using satellites to transfer data across the Atlantic Ocean. At the time, weather was especially important to monitor when planning military operations. Both of these uses required fast computing and the handling of large quantities of data, and ARPANET rose to the occasion with both hardware and software to support these military needs.

Usage of the network didn't truly explode until the advent of e-mail. Mail was a concept already easily understood by the population at large, which allowed it to be readily accepted by the community. E-mail had major benefits that overcame problems with both conventional mail and the telephone: an e-mail arrived almost immediately, and it did not require the recipient to be at their station to accept it. Even in the early years, users were using e-mail much the way people text message today, in short, frequent messages. E-mail was one of the main features not related to military needs, and it was created by the users, not by ARPA. E-mail showed that the power to change the Internet was truly in the hands of the user: a user skilled and determined enough had the power to create nearly anything.

Map of the early ARPANET nodes

Xerox PARC



 
