
Symbiotic Computing


The history of computers is a well-documented progression from enormous, expensive mainframes built for complex scientific and industrial work to tiny, cheap devices that sit in our pockets and our homes and touch nearly every facet of our daily lives. However, the story of how computing evolved from the niche interest of scientists and mathematicians into ubiquitous, user-friendly machines demands a deeper analysis of the events and influences that brought computers into symbiosis with humanity. The transition from room-sized mainframes programmed with punch cards to phones and tablets so intuitive that toddlers can use them was neither fast nor easy, but a long process of triumphs and failures shaped by the many brilliant people across the country who came together to create something truly remarkable. This underground technological revolution has forever changed the way we live, work, and communicate, and it will continue to grow more symbiotic with humanity as new technologies and ideas are developed. The purpose of this wiki is to present and analyze the processes, influences, and engines that drove these ideas into the modern age.

 

 

Project MAC


In the days when every computing resource was prohibitively expensive, batch processing was the dominant form of computing in businesses and universities everywhere. Yet as technology advanced and the limitations of batch processing became more and more apparent, people began looking for better ways to use computers. One solution was the then-radical idea of time-sharing, in which multiple users could simultaneously use the same mainframe computer. While the idea could not be realized on the machines of the day, that did not stop one MIT lab from trying.

A view inside the Project MAC Laboratory

Formation

J.C.R. Licklider, regarded by many as the visionary of interactive computing, served as the director of ARPA's Information Processing Techniques Office (IPTO) from 1962 to 1964. During this time, he was looking for a way to implement the then-radical idea of time-sharing, where multiple users could simultaneously use the same mainframe computer. He found a solution through his former colleague Robert Fano, a professor at MIT. Fano, along with Fernando J. Corbató, Robert Metcalfe, and others, proposed Project MAC (signifying both Multiple Access Computer and Machine Aided Cognition) to Licklider: a laboratory focused on developing time-sharing and its applications, and on using it to research other uses of time-shared computers. Licklider awarded the group a $2 million grant from ARPA, with Fano serving as the director.

Research

Multics

Fano using a CTSS Terminal

Over the next few years, Project MAC would become famous for its research on operating systems, artificial intelligence, and the theory of computation. Some of its most important developments were in time-sharing, one of the main reasons the lab was created. One of the first time-sharing systems, the Compatible Time-Sharing System (CTSS), was developed by Corbató and demonstrated at MIT before the formation of Project MAC; it was used at the lab throughout its operation and greatly influenced the design of subsequent time-sharing systems. Most notable was Multics, the successor to CTSS. Developed at Project MAC, Multics would go on to be the basis of almost every other multiple-access system created. Multics would also inspire the development of Unix in 1969, the foundation of a family of systems that continues to be used to this day.

Artificial Intelligence

The "AI Group," led by Marvin Minsky, was a group of programmers and computer scientists integrated into Project MAC. They were interested in the problems of vision, mechanical motion and manipulation, and language, which they viewed as the keys to more intelligent machines. They also had their own mainframe computer (a PDP-6, followed by a PDP-10). Because the group disagreed with the direction taken with Multics (particularly the decision to include powerful system security), they developed their own time-sharing operating system for these machines, known as the Incompatible Time-Sharing System, or ITS, as a joke on the name of CTSS.

An IBM 7094, an example of one of the machines used at Project MAC

Separation of LCS and AI Lab

By the late 1960s, Minsky's AI group wanted more space to conduct their own research, and were unable to get satisfaction from the then project director Licklider (who had left ARPA for a brief stint at IBM before returning to MIT to direct Project MAC). This, along with no small amount of university politics, led to a separate MIT AI Lab being formed in 1970. Minsky, along with many of his AI group colleagues, left Project MAC to join the new lab. Talented programmers such as Richard Stallman (who would go on to write EMACS and launch the GNU project) flourished at the AI Lab during this time.

Most of the researchers who did not join the AI Lab left Project MAC to form the Laboratory for Computer Science, where they continued their research on operating systems, programming languages, distributed systems, and the theory of computation for the next thirty years.

A PDP-6, one of the computers used by the AI Lab

Analysis

Project MAC marked the beginning of a massive shift in what computing meant and how people perceived it. Instead of seeing computers as incredibly technical, complex, and difficult-to-use math machines, the scientists at Project MAC began to see that they could be versatile, effective tools for communication and collaboration. The key, however, was to make computers much more personal and easy to use. These ideas would prove to be the foundation of the computing movement, and these scientists and their ideas pervade the rest of the evolution of computing. These concepts would take much more time to be grasped by the general public, however, and the computing revolution would remain "underground," so to speak, in the realm of these specialists and visionaries until computers started to evolve into an even more personal, symbiotic form.

Arpanet


Origins of the Internet

Early Networking
The desire for a solution to the threat of a nuclear first strike during the Cold War prompted the Air Force to build a communications system that could survive attacks and maintain "proper command and control." The responsibility for realizing this system fell to the Air Force's "think tank," the RAND Corporation. Founded in 1946, RAND was "an outgrowth of operations research efforts initiated during World War II." It attracted many talented minds and was active in computer science research.
Paul Baran was a young engineer who joined RAND in 1959 and saw the need for a survivable communications system even without an explicit contract from the Air Force. One of his first attempts at such a system was the "Go / No Go" system, intended to provide control through redundant AM radio communication lines, so "...the loss of any single point in the network would not result in a critical failure." When the Air Force was presented with this idea, they insisted they needed greater communications capacity.
Over the next three years, Baran formulated a new communications system "...that would combine survivability with high capacity." He planned to achieve this with a distributed system in which every node was capable of switching traffic, rather than a hierarchical, concentrated switching system, with as many as eight lines between each node. Baran also incorporated cryptography and a message priority system.


Paul Baran
Data moved through this network in a manner called "message switching" or "store-and-forward switching." Entire messages, tagged with destinations and sources, moved through the network from one node to the next, with each node receiving, storing, and then forwarding the message until it reached its destination. Traffic could be mediated by holding messages until a line was less busy, increasing the system's efficiency and reducing the possibility of data loss.
Various message switching systems were already in use by the DoD, but because of low transmission speeds the computers (at switching nodes) tended to be large and complex in order to store all the messages that would be stockpiled waiting to transmit. Baran believed that a faster transmission system would allow for cheaper computers, and therefore more nodes.
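The store-and-forward idea described above can be sketched in a few lines. This is an illustrative toy, not any historical implementation: the `Node` class, the route, and the message text are invented for the example.

```python
from collections import deque

class Node:
    """A switching node that stores whole messages before forwarding them."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()          # messages held, waiting for a free line

    def receive(self, message):
        self.queue.append(message)    # store the entire message

def deliver(message, route):
    """Move a message hop by hop along a precomputed route of Nodes."""
    path = []
    for node in route:
        node.receive(message)         # store at this node...
        path.append(node.name)
        node.queue.popleft()          # ...then forward to the next hop
    return path

a, b, c = Node("A"), Node("B"), Node("C")
assert deliver("ATTACK AT DAWN", [a, b, c]) == ["A", "B", "C"]
```

Note how every intermediate node must hold the complete message; this is exactly why, at low transmission speeds, the switching computers had to be large enough to stockpile waiting traffic.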
In contrast to the distributed system Baran had proposed, AT&T developed a decentralized communications system called AUTOVON. Designed by AT&T in 1961, AUTOVON provided disaster survivability more simply, by decentralizing switching stations. Baran's proposal differed in that it was distributed rather than decentralized: in AUTOVON, individual nodes were still responsible for serving hundreds of lines, and traffic was re-routed manually by operators sitting in the switching stations.

Packet Switching
The concept of switching small blocks of data was first explored by Paul Baran in the early 1960s. Independently, Donald Davies at the National Physical Laboratory in the UK had developed the same ideas.
Baran developed the concept of message block switching during his research at the RAND Corporation for the US Air Force into survivable communications networks, first presented to the Air Force in the summer of 1961 as briefing B-265 and then published as RAND Paper P-2626 in 1962. The P-2626 paper described a general architecture for a large-scale, distributed, survivable communications network. It focuses on three key ideas: first, the use of a distributed network with multiple paths between any two points; second, the division of complete user messages into what he called message blocks (later renamed packets); and third, the delivery of these messages by store-and-forward switching.
Adoption of packet switching was slow, however, because it carries overhead, and because the de facto standard means of communication until then was circuit-based. In packet switching, the computers at either end have to split the data into packets and reassemble the data from packets. Compounding the overhead, each block of a split message has to carry its own destination, instead of one destination per entire message. And because individual packets could take different routes through the system, they might arrive out of order, requiring more complex reassembly algorithms. The packet switching debate would continue into the 1960s.
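The overhead described above is easy to see in a toy sketch: every packet carries its own destination and sequence number, and the receiver must re-order packets before reassembly. The packet format here is invented for illustration.

```python
import random

def split_into_packets(message, dest, size=4):
    """Split a message into fixed-size blocks, each with its own header."""
    return [
        {"dest": dest, "seq": i, "data": message[i*size:(i+1)*size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    """Packets may arrive in any order; sort by sequence number first."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

pkts = split_into_packets("HELLO, ARPANET", dest="UCLA")
random.shuffle(pkts)                 # simulate out-of-order arrival
assert reassemble(pkts) == "HELLO, ARPANET"
```

The per-packet `dest` and `seq` fields are exactly the extra bytes that circuit-switching advocates objected to; the `sorted` call stands in for the more complex re-assembly algorithms the text mentions.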

ARPANET
The Advanced Research Projects Agency would be the first to develop a large-scale packet switching network. The ARPANET project was headed by Lawrence Roberts, a former computer scientist at MIT who had been working on networking. After meeting Roger Scantlebury at a computing symposium in Tennessee in 1967, Roberts was convinced to increase the bandwidth of the packet switching system from 9.6 to 56 kilobits per second. The additional bandwidth allowed for larger amounts of traffic and reduced the need for complex, expensive hardware at each node, since packets no longer had to be stored as long before forwarding. Through various encounters, the ARPANET group was exposed to the ideas and techniques of Baran and Davies and became convinced that packet switching was the way to go for ARPA.
In 1975 the ARPANET was declared operational, and control of it passed to the Defense Communications Agency, freeing ARPA funds to pursue other research projects. The network grew to over 200 hosts by 1981, with a new host being added every 12 days. As the network grew and became increasingly public (as well as connected to machines outside of the United States), the US military decided to split the network, separating military machines from the research machines and placing them in their own MILNET. The MILNET could still communicate with the ARPANET via email routed through gateways.


Log of first message sent via ARPANET - 10/29/1969
These early steps in networking laid the foundation for what would eventually become the Internet. The concepts of distributed networking and packet switching as a means of large-scale communication would become integral parts of the ARPANET and later the Internet. The Internet has its roots in the foundations laid here, but it grew to become something much larger and more complex than any of the early computer scientists who contributed to it could have predicted.

Creators of the Internet


The people who molded and created the framework for what we now call the Internet include J.C.R. Licklider, Paul Baran, Donald Davies, Louis Pouzin, Robert Taylor, and Lawrence Roberts. Each contributed concepts that were necessary for nodes to connect, chief among them packet switching, TCP, TCP/IP, and the IMP.

Before personal computers, time-sharing was the main method of computing, and it shared many of the original concepts of networking: a mainframe computer (like today's server), multiple users connecting via remote terminals (like today's web browsers), each making use of a slice of the processor (like a slice of bandwidth). Originally, very few people were allowed time on the mainframe, and you had to have a very good reason to use the system. Joseph C. R. Licklider, the first director of the Information Processing Techniques Office at DARPA, was a major proponent of the seamless integration of computing into everyday life and of the idea that everyone should be able to use computers. Licklider directed funding toward this goal and inspired others to continue his work.

Paul Baran was a researcher at RAND who was interested in US military communications in the event of a nuclear strike. At that time, the longest-range form of communication was the long radio wave. A nuclear strike would have rendered this form of communication useless: to traverse the earth, radio waves must bounce off its surface as well as the ionosphere, and a nuclear strike would have temporarily altered the ionosphere in ways that would adversely affect radio transmissions. To overcome this limitation, Baran foresaw a distributed network in which each node could forward information along a chain of nodes to its destination, dynamically routing around nodes that became inoperable over the course of use. Such a network could use directed signals instead of radio waves, because the distributed design negated the short-range limitations of the medium. Baran even went so far as to envision a form of packet switching (he called it message blocks) that is one of the basic elements of today's Internet. Donald Davies at the NPL (National Physical Laboratory in England) also came up with packet switching, and is generally credited with the invention even though the two arrived at the same concept at nearly the same time. NPL implemented a working network based on Davies's idea, an early proof of concept.

Donald Davies, center

France also had its own research center developing a computer network, run by Louis Pouzin. His efforts produced the CYCLADES network, which brought a number of innovations that would later be adopted by the Internet. First, the CYCLADES network was divided into layers, much like the IP stack that unwinds the series of headers surrounding data sent over a network today. Pouzin simplified the design of his network layers by removing the requirement of reliable transmission between nodes. Instead of requiring the network to be reliable, he shifted that responsibility to the end nodes: if they wanted reliable transmission, they had to detect missing packets and request retransmission. These transmissions, with and without built-in network reliability, are early forms of the TCP and UDP network protocols. CYCLADES emphasized the ability of networks to communicate with other networks, each acting as a subnet of the whole, so that CYCLADES could be expanded beyond its then-small number of nodes. CYCLADES also solved the problem that store-and-forward methods made the non-end nodes in a network expensive and invasive: it provided a direct hardware implementation of packet relay, which cut down on the time data spent in transit at a non-end node and removed the need for each non-end node to inspect the contents of a packet simply to forward it. These same ideas are found in the design of the Internet, allowing dynamic expansion and subnets that are owned, operated, and controlled by private individuals while maintaining relatively fast speeds and privacy. In fact, it was from this inter-network communication that the term Internet first arose.

The International Organization for Standardization published its own model of a layered network, similar to CYCLADES but created for more general use: the Open Systems Interconnection (OSI) model. The model was extremely comprehensive, separating the transmission of data into seven layers: the Physical, Data Link, Network, Transport, Session, Presentation, and Application layers. The OSI model is the precursor to the TCP/IP protocol stack.
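The layering idea can be illustrated with a toy sketch: each layer wraps the payload in its own header on the way down the stack, and strips it on the way up. The layer names follow the OSI model; the bracketed header format is invented purely for the example.

```python
# Top of the stack first, physical medium last.
LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link", "Physical"]

def encapsulate(data):
    """Wrap data in one header per layer, top of the stack downward."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def decapsulate(frame):
    """Strip headers bottom of the stack upward, recovering the payload."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"expected {layer} header"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("GET index")
assert wire.startswith("[Physical]")     # outermost header is the lowest layer
assert decapsulate(wire) == "GET index"
```

The point of the model is visible even in this toy: each layer only handles its own header and treats everything inside as an opaque payload.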

The Defense Advanced Research Projects Agency had in the meantime been attempting to develop a network of its own, and in the process developed its own set of goals and innovations. Afterwards, the researchers at DARPA would also adopt the ideas and advancements of the other prototypical networks worldwide, and create the precursor that would grow into the internet itself, the ARPANET.

The two most influential men at DARPA were Robert Taylor and Lawrence Roberts. Taylor wanted the various research groups around the world to work together so that information would be shared, redundancy reduced, and costs lowered. Roberts was responsible for a number of design decisions that gave the ARPANET its robust qualities. He made the decision to use packet switching rather than circuit switching in the ARPANET, despite opposition: the only other well-proven US information network of its kind was the telephone system, which used circuit switching, and despite the packet switching network at the NPL in England and the CYCLADES network, packet switching's status as a new technology posed a real risk to the project. Roberts also planned the ARPANET as a distributed network, mirroring the ideas of Paul Baran and Louis Pouzin. The goal was to reduce transmission costs, increase reliability, and create an extensible network.

The BBN IMP team

To protect DARPA's rather expensive computers from the hardware alterations that would otherwise be necessary to use the new network, the team invented a computer to sit between an end node and the network at large. This became known as an Interface Message Processor, or IMP. The IMP was dedicated to handling all inbound and outbound network traffic for its node, as well as acting as an interface to the mainframe, much like a router. Also developed at DARPA were the networking protocols that would control data flow across the networks. This began with the Network Control Protocol, or NCP, which offered the ability to move packets to a defined destination. NCP would later be replaced by the more capable Transmission Control Protocol, or TCP, which added the ability to send and receive acknowledgements for packets. In this way the end nodes, rather than the network, could guarantee the success of packet transfer, paralleling the line of thinking that created CYCLADES. TCP would evolve into an even more comprehensive protocol after DARPA adopted the Open Systems Interconnection reference model; the result was the Internet Protocol Suite, or TCP/IP.
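The acknowledgement idea behind TCP can be sketched as a minimal stop-and-wait loop: the sender retransmits each packet until the receiver confirms it, so reliability lives at the end nodes rather than in the network. The lossy channel, loss rate, and packet format here are invented for illustration; real TCP is far more sophisticated (windows, timers, sequence spaces).

```python
import random

random.seed(7)                               # deterministic demo

def lossy_send(packet, loss_rate=0.3):
    """Simulate an unreliable network: deliver the packet or drop it."""
    return None if random.random() < loss_rate else packet

def reliable_transfer(packets):
    """Stop-and-wait: retransmit each packet until it gets through."""
    received = []
    for seq, data in enumerate(packets):
        while True:
            delivered = lossy_send((seq, data))
            if delivered is not None:        # receiver got it and ACKs
                received.append(delivered[1])
                break
            # no ACK arrived: sender times out and retransmits
    return "".join(received)

assert reliable_transfer(["LO", "GIN"]) == "LOGIN"
```

Note that `lossy_send` (the network) makes no promises at all; correctness comes entirely from the sender's retry loop, which is the CYCLADES-style end-to-end argument the text describes.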

DARPA and the creators of the Internet had finally accomplished what they set out to do, and more. Without these men and DARPA, it is hard to imagine how much longer it would have taken to reach the point we are at today.

Molders of the Internet

The molders of the Internet were, perhaps surprisingly, its users. Users, most of whom were computer scientists, often became developers, and this was generally encouraged. The early days of ARPANET were not easy: the network was still largely unsupported, and some nodes still couldn't quite figure out what the network was for. A multitude of hurdles stood in the way of users joining ARPANET. For starters, having an ARPANET node installed at your location cost anywhere from $55,000 to $107,000. Even then, all the support provided came in the form of a manual describing how to configure the network, a task that could keep an expert computer scientist occupied for a year. For the non-tech-savvy, the network was nearly impossible to use. In response, hosts who needed users attracted them by providing support and easier-to-understand tutorials.

Many users also envisioned that this model of networking would allow all of the world's computing to be handled by mainframes that people connected to remotely. However, due to reliability and raw speed issues, many users instead resorted to simply copying programs from host computers and running the software on their own systems. The network did provide unusual solutions to certain problems, though, such as transferring data from computer to computer within a local area network. This was first heavily utilized at MIT, where staff and students used the local network to transfer files instead of walking from computer to computer with magnetic tape.

ARPANET users banded together, forming groups such as USING to demand more functionality and better support from ARPA. This was successful for a time, until ARPA's moderators felt they were no longer in control of their network. Not wanting to be seen as merely bending to the will of a group of users, they locked USING out of demanding more user functionality and hampered the efforts of individual users developing their own products for the network. BBN was extremely proactive in matters of network maintenance and general improvements to existing functionality, but more resistant to demands for new functionality, especially demands that would add to their already large management task list.

The major uses of ARPANET before the introduction of e-mail were climatology and seismology. Interestingly, one use of the seismology work was to monitor both Russian and American lands for signs of prohibited nuclear testing; this was also an instance of using satellites to transfer data across the Atlantic Ocean. At the time, weather was especially important to monitor when planning military operations. Both of these uses required fast computing and the handling of large quantities of data, and ARPANET rose to the occasion with both hardware and software to support these military needs.

Usage of the network didn't truly explode until the advent of e-mail. Mail was a concept already easily understood by the population at large, which allowed it to be readily accepted by the community. E-mail overcame problems with both conventional mail and the telephone: it was almost immediate, and it did not require the recipient to be at their station to accept it. Even in the early years, users used e-mail much the way people text-message today, in short, frequent messages. E-mail was one of the main non-military features, and it was created by the users, not ARPANET. E-mail showed that the power to change the Internet was truly in the hands of the user: a skilled and determined user had the power to create nearly anything.

Map of the early ARPANET nodes

Xerox PARC


If any person were asked to name the biggest, most prestigious companies associated with the computer market, most would undoubtedly answer along the lines of Microsoft, Apple, Hewlett-Packard, and perhaps IBM. But long before any of those companies existed or were involved in personal computing, a company whose name is still synonymous with copiers pioneered most of the technologies and ideas that make computing as pervasive as it is today. Xerox grew explosively throughout the 1960s, holding a near-total monopoly on the copier industry. With billions in new capital and rising competition on the horizon, Xerox CEO Peter McColough looked to broaden the company's influence and saw potential in the then-obscure field of computing. Most of the company's other executives thought he was crazy, risking the company's future on a technology that had reached its peak. Furthermore, IBM was then to computing what Xerox was to copying, holding a massive share of the market and making it nearly impossible to compete. Yet McColough was a visionary, and with Xerox already strongly present in offices across the nation, he wanted the company to own "the office of the future" and set in motion a plan to make it a reality.

A view of PARC

Founding of PARC

Recruitment

Setting his plan in motion in 1969, McColough delegated the responsibility of setting up a new computing research facility to the company's chief scientist, Jack Goldman. While Goldman was an accomplished scientist and lab manager, having been a faculty member at Carnegie Tech (later Carnegie Mellon) and the director of the Ford Scientific Laboratory, he knew nothing about computers. Goldman approached another physicist and former colleague, George Pake, looking for a manager for the new research center as well as help recruiting staff and finding a location. While Pake did not have much computer experience either, Goldman knew he would be an exceptional manager for the new lab. Pake believed deeply that the key to a successful laboratory is the wise selection of its people. On managing the laboratory, he said,

"Little success is likely to come from showing researchers to a laboratory, describing in detail a desired technology or process not now existent, and commanding: 'Invent!' The enterprise will go much better if some overall goals or needs are generally described and understood and if proposals for research are solicited from creative professionals. Managing the research then consists of adjusting the budgets for the program to give selective encouragement.”

The first and most important of these wise selections was the recruitment of Robert Taylor, who brought much-needed computer experience to help manage the facility and recruit scientists to the research center. Pake knew Taylor from his time as provost at Washington University in St. Louis. Taylor's time at ARPA had given him one of the best networks of computing personnel in the country; he knew just about everyone who would make a good addition to PARC's research staff. Additionally, Taylor shared the philosophies of visionaries like J.C.R. Licklider, and knew that maintaining a spirit and atmosphere in the lab that promoted those ideals would enable it to flourish. With Taylor's help, Pake and Goldman recruited the best minds from universities and companies across the nation, assembling a team that could (and would) produce something truly incredible.

Robert Taylor

Location, Location, Location

After traveling all over the country in search of an exceptional location for this exceptional new research center, Pake and Goldman selected Palo Alto, California. They wanted a place where their scientists could work and innovate freely, away from corporate bureaucracy and oversight. Almost 3,000 miles from Xerox's corporate headquarters and existing research lab in Rochester, NY, the distance, Pake and Goldman felt, would benefit both their staff and the always budget-focused bureaucrats at Xerox.

Nestled in the middle of the quickly growing Silicon Valley, the Palo Alto location put the research center's staff at the heart of the burgeoning technology community. It also made it easy to attract more brilliant people to the lab, especially once it began to make a name for itself as a hotbed of innovation. Taylor's hiring process for new employees, while unorthodox, contributed to the atmosphere he maintained at the lab: each prospective hire would meet and be interviewed by everyone at the lab, and the group then reached a consensus on whether the candidate measured up to their standards. This led to a very community-focused culture at PARC, a model that enabled the close-knit group of scientists to forge stronger bonds with each other and collaborate more effectively.

A look inside PARC

Research at PARC

The PARC researchers were known to be tinkerers and hackers; they liked to make things. This embodied the approach to research at PARC, where employees were given near-total freedom to work on whatever they deemed worthy of their time and effort.

In 1972, Chuck Thacker and Butler Lampson invited Alan Kay to join their project to build a small personal computer. The machine would be known as the Alto, with a keyboard, screen, and processor in a portable, suitcase-sized package (it would later gain a mouse and a GUI). The idea was that processors would be cheap enough in 5-10 years for every person to have their own "personal computer" instead of sharing time on an office computer.

Thacker began design work on the Alto. The original plan was to make 30 units for the PARC computer science lab. The screen would be 8.5x11" to mimic paper, and the projected cost was $10,500 per machine. In the end, Xerox made 2,000 Altos at a cost of about $18,000 per machine, which fell to $12,000 after a high-volume program was put in place. There were technical innovations such as micro-parallel processing (to shift the memory access problem to the microprocessor) and a new high-performance display that used less memory (and so allowed the user to actually run applications).

Meanwhile, in June 1972, Bob Metcalfe encountered a technical paper by Norman Abramson describing Hawaii's ALOHAnet, a radio network. Metcalfe would use several principles from that paper while designing the first Ethernet, a computer networking technology for local area networks (it is how most office networks are connected even in 2010). A month later, Metcalfe wrote a patent memo describing his networking system, using the term "Ethernet" for the first time.

Metcalfe had come from Harvard, which rejected his doctoral thesis on how networks transmit data in digital packets as "insufficiently theoretical." He would later use the concepts in that thesis to build a multi-billion-dollar company and transform the networking industry (he also resubmitted his thesis with more math, and it was accepted). Metcalfe had a huge advantage over many researchers because he had been the Arpanet liaison or "facilitator" at MIT in 1971, so he had seen the early networking technical issues firsthand and had valuable personal connections with people on the Arpanet.

Taylor had set the specs for a local area network linking the Altos: it should cost no more than 5% of the computers it connected, be simple with no complex hardware, and be reliable and easily expandable (he didn't want to splice cable all the time). Metcalfe adapted Abramson's design for the Altos, building in redundancy (a string of verification bits known as a "checksum") and an algorithm to deal with interference. It would also require a physical line, and Metcalfe called it the Ethernet.
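The "verification bits" idea is easy to illustrate. Modern Ethernet frames carry a CRC-32 frame-check sequence; the sketch below uses Python's `zlib.crc32` to show the principle, not Metcalfe's exact scheme, and the frame layout is invented for the example.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload."""
    checksum = zlib.crc32(payload)
    return payload + checksum.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the checksum and compare it with the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = make_frame(b"ALTO->PRINTER")
assert check_frame(frame)                 # intact frame verifies
corrupted = b"X" + frame[1:]              # flip the first byte
assert not check_frame(corrupted)         # corruption is detected
```

A receiver that detects a bad checksum simply discards the frame and lets the sender retry, which keeps each node's hardware simple, exactly the kind of simplicity Taylor's specs demanded.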

 

The Alto is the First Personal Computer (PC)

In April 1973, the first Alto became operational, displaying an animated image of Sesame Street’s Cookie Monster. The Alto had been described in a 1972 memo by Butler Lampson (himself inspired by Doug Engelbart’s “Mother of All Demos”); Chuck Thacker was its principal designer. Lampson’s memo had proposed a system of interacting workstations, files, printers, and devices linked via one co-axial cable within a local area network, whose members could join or leave the network without disrupting the traffic.

The Alto was revolutionary because it was a personal workstation meant to sit on a single desktop – a computer for one person, not a room-sized, time-sharing computer for many. It is credited as being the first “personal computer” (PC) in a world of mainframes (note that some would argue for other PCs being first, like the Olivetti P101). The Alto had a bit-mapped display, a graphical user interface (GUI) with windows and icons, and a “what you see is what you get” (WYSIWYG or “wizzy-wig”) editor. It also had file storage, a mouse, and software to create documents, send e-mails, and edit basic bitmap pictures. Also in April 1973, Dick Shoup’s “Superpaint” frame buffer recorded and stored its first video image, showing Shoup holding a sign reading, “It works, sort of.”  It was the first workable paint program.

The Alto got better as PARC’s programmers built applications for it. In fall 1974 Dan Ingalls invented “BitBlt,” a display algorithm that later made possible key features of the modern computer interface: overlapping screen windows, icons, and pop-up menus that could be manipulated with a mouse click. This is the desktop metaphor used by 99% of personal computers around the world even in 2010. At the same time, Charles Simonyi, Tim Mott, and Larry Tesler began work on two programs which would become the world’s first user-friendly computer word processing system.
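At its core, BitBlt (“bit block transfer”) moves a rectangular block of pixels from one bitmap to another, combining source and destination with a chosen raster operation. The sketch below is a minimal illustration of that idea using nested Python lists as bitmaps; the real Alto implementation worked on packed words in microcode, but the per-pixel logic is the same.

```python
def bitblt(dst, src, dx, dy, op=lambda d, s: s):
    # Transfer the rectangular block `src` into bitmap `dst` at offset
    # (dx, dy), combining each destination/source pixel pair with the
    # raster operation `op`. The default op simply replaces the pixel;
    # passing XOR lets the same call draw and later erase a shape,
    # which is how overlapping windows and cursors can be managed.
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            dst[dy + y][dx + x] = op(dst[dy + y][dx + x], pixel)
```

For example, `bitblt(screen, glyph, 2, 1)` stamps a small glyph onto the screen bitmap, and calling it again with `op=lambda d, s: d ^ s` XORs the same glyph back out, restoring the pixels underneath.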

The Alto, BitBlt, and Bravo basically created the modern industry of desktop publishing, used by office workers around the world. Ordinary people at home or work could turn out professional quality newsletters, magazines, books, quarterly letters, and so on faster and more easily.

Bravo, the word processor, has a fascinating story. Charles Simonyi, a Hungarian computer science student who defected to the US, was a key actor. His defection, as a side note, caused the Hungarian government to fire his father from a teaching job at a Budapest engineering institute, showing how the vaunted “Soviet science” system devoured its best talent for idiotic political reasons. Simonyi built on Butler Lampson’s ideas for holding an entire document in memory using “piece tables” to create an app called Bravo. It was the first “what you see is what you get” (WYSIWYG) word processor to run on a computer at a reasonable speed – a useful application. People started coming to PARC to use it for personal things like PTA reports, letters to professional bodies, resumes, and so on. Their friends writing PhD theses wanted to use it. Larry Tesler and Tim Mott, after watching how non-engineers actually interacted with the program, improved the Bravo user interface to create something similar to the menu-based interface people used in MS Word in 2005, with features like “cut,” “paste,” and so on.
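The piece-table idea Simonyi borrowed from Lampson is still used in editors today: the original text is kept immutable, every insertion is appended to a separate add-buffer, and the document is just an ordered list of “pieces” pointing into one buffer or the other, so edits never shuffle the underlying text. The class below is a minimal sketch of that structure (insertion only), not Bravo’s actual code.

```python
class PieceTable:
    # Minimal piece table: `original` is never modified; insertions are
    # appended to `added`; `pieces` is an ordered list of
    # (buffer_name, start, length) spans describing the document.
    def __init__(self, original: str):
        self.original = original
        self.added = ""
        self.pieces = [("orig", 0, len(original))] if original else []

    def insert(self, pos: int, text: str):
        start = len(self.added)
        self.added += text          # append-only; old text stays put
        new_pieces, offset, placed = [], 0, False
        for buf, s, ln in self.pieces:
            if not placed and offset <= pos <= offset + ln:
                cut = pos - offset  # split the piece at the insert point
                if cut:
                    new_pieces.append((buf, s, cut))
                new_pieces.append(("add", start, len(text)))
                if ln - cut:
                    new_pieces.append((buf, s + cut, ln - cut))
                placed = True
            else:
                new_pieces.append((buf, s, ln))
            offset += ln
        if not placed:              # insert at the very end of the text
            new_pieces.append(("add", start, len(text)))
        self.pieces = new_pieces

    def text(self) -> str:
        # Reassemble the document by walking the pieces in order.
        bufs = {"orig": self.original, "add": self.added}
        return "".join(bufs[b][s:s + ln] for b, s, ln in self.pieces)
```

Because an edit only splits one piece and appends one span, the cost of a keystroke is independent of document length, which is what let Bravo keep a whole document responsive in the Alto’s limited memory.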


Collapse

PARC clearly created a host of revolutionary, innovative technologies, positioning Xerox to dominate the future of personal computing. But why didn’t it? Xerox had all the tools to succeed in place, and the corporate infrastructure to support such an obvious opportunity. Unfortunately, that corporate infrastructure would be its undoing. Despite the clear opportunities developed at PARC, the company’s decision-making on dozens of occasions turned not on new technologies and opportunities, but on personalities, politics, and short-term incentives. Many of the company’s prominent managers saw it as a copier company, not as a computer or a publishing company, let alone an enabler of McColough’s “office of the future.”  The managers were fixated on their previously successful leased-copier business model, and the sales force was trained on copiers and typewriters, not new office technology. Moreover, the purchasing managers for computers were professional IT people, not the managers who ordered copiers.

Perhaps most importantly, Xerox wouldn’t allow entrepreneurial scientists the freedom to pursue their ideas outside the corporate bureaucracy. New ventures had to be led by people running established divisions, people who hated risk-taking. So Xerox lost its exceptionally talented people, many of whom left for startups that became billion-dollar companies much bigger than Xerox. Many of their ideas and technologies left with them, leaking into companies that could properly monetize them. This was a fault with PARC itself, which often acted as a pure research center: the scientists were generally far removed from customer development, sales, or entrepreneurial development. The few Xerox executives (not PARC researchers) who tried to commercialize products were crushed by the corporate bureaucracy.

As Steve Jobs said in a speech in 1996:  “Xerox could have owned the entire computer industry… could have been the IBM of the nineties… could have been the Microsoft of the nineties.” So while PARC was a success at innovation, it was a failure at commercialization.

Analysis

Where Project MAC was the beginning of a shift to symbiotic computing, Xerox PARC was the culmination of it. The technologies developed at PARC, such as GUIs, the mouse, bitmapped displays, and more, continue to be seen in nearly every aspect of modern personal computing, and have since become so integrated into computing technology that it is nearly impossible to imagine a world without them. The unique nature of PARC's staff and organization enabled them to revolutionize the world and realize J.C.R. Licklider's dream of truly personal computing.

Conclusions

Over time, computing has evolved from an obscure technology confined to the realms of government agencies and university labs into a widespread phenomenon affecting nearly every aspect of modern life. It is abundantly clear that the people involved were one of the most important parts of the evolution of computing. Throughout these events, the same people appear and reappear, sometimes in different roles and locations, but this much is clear: their ideas and their relationships with one another far transcended what any of them could have done individually. These people, inspired (directly or indirectly) by Licklider’s idea of man-computer symbiosis, made interactive computing possible. Instead of a “perfect storm” of events culminating in a catastrophic failure, the development of interactive computing was a “perfect sunny day” of the right people in the right places with the right funding and a drive to create something that changed humanity as we know it.

   The word “evolution” is the most appropriate description for the history of the computer for several reasons. Since the beginning of human history, Homo sapiens have used tools to help them survive and prosper. As humans have evolved, our technologies and tools have evolved alongside us, continuing to be an integral part of life as we know it today. Some may argue that technology is simply a by-product of our evolutionary progress, but it is much more; it is a part of human evolution, and we as a species are beginning an age where computers and humans exist in a true symbiosis between man and machine. It would be difficult to find someone who can now imagine a world without computers, and that bond is only going to get stronger.
