'arpanet' Episodes

The Intergalactic Memo That Was The Seed Of The Internet

     10/12/2020

JCR Licklider sent a memo called "Memorandum For Members and Affiliates of the Intergalactic Computer Network" in 1963 that is quite possibly the original spark that lit the bonfire called the ARPANET, the nascent beginning of what we now call the Internet. In the memo, “Lick,” as his friends called him, documented early issues in building out a time-sharing network of computers available to the research scientists of the early 60s.

The memo is a bit long, so I’ll include quotes followed by explanations, or I guess you might call them interpretations. Let’s start with the second paragraph:

The need for the meeting and the purpose of the meeting are things that I feel intuitively, not things that I perceive in clear structure. I am afraid that that fact will be too evident in the following paragraphs. Nevertheless, I shall try to set forth some background material and some thoughts about possible interactions among the various activities in the overall enterprise for which, as you may have detected in the above subject, I am at a loss for a name.

Intuition, to me, is important. Lick had attended conferences on cybernetics and artificial intelligence going back to the 40s. He had been MIT faculty and was working for a new defense research organization. He was a visionary. Still, let’s call his vision a hypothesis. During the 1960s, the Soviets would attempt to build multiple networks similar to the ARPANET. The difference is that, much like a modern product manager, Lick chunked up the work to be done and had various small teams tackle parts of projects, each building a piece but together proving the theory in a decentralized way, as compared to the Soviet projects that went all-in.

A couple of paragraphs later, Lick goes on to state:

In pursuing the individual objectives, various members of the group will be preparing executive and monitoring routines, languages amd [sic.] compilers, debugging systems and documentation schemes, and substantive computer programs of more or less general usefulness. One of the purposes of the meeting–perhaps the main purpose–is to explore the possibilities for mutual advantage in these activities–to determine who is dependent upon whom for what and who may achieve a bonus benefit from which activities of what other members of the group. It will be necessary to take into account the costs as well as the values, of course. Nevertheless, it seems to me that it is much more likely to be advantageous than disadvantageous for each to see the others’ tentative plans before the plans are entirely crystalized. I do not mean to argue that everyone should abide by some rigid system of rules and constraints that might maximize, for example, program interchangeability.

Here, he’s acknowledging that stakeholders have different needs, goals, and values, but saying that if everyone shared plans, the outcome could be greater across the board. He goes on to further state:

But, I do think that we should see the main parts of the several projected efforts, all on one blackboard, so that it will be more evident than it would otherwise be, where network-wide conventions would be helpful and where individual concessions to group advantage would be most important.

These days we prefer a whiteboard, or maybe even a Miro board. But this act of visualization would let research from disparate fields, like Paul Baran’s work on packet switching at RAND at the time, be pulled in to think about how networks would look and work. While the government was providing money to different institutes, the research organizations were autonomous. By having each node able to operate on its own, rather than employing a centralized approach, the network could be built so that signals could travel along multiple paths in case one path broke down, thus getting at the heart of the matter: having a network that could survive a nuclear attack provided some link or links survived.
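
To make that resilience argument concrete, here’s a minimal sketch in Python. It is purely illustrative, not Baran’s actual design; the topology just borrows the names of the first four ARPANET sites. It shows how a decentralized mesh, where each node only knows its own neighbors, can route around a dead link:

    from collections import deque

    # A toy mesh; each node keeps only its own view of its neighbors,
    # so there is no central switchboard to knock out.
    links = {
        "UCLA": {"SRI", "UCSB"},
        "SRI":  {"UCLA", "UTAH"},
        "UCSB": {"UCLA", "UTAH"},
        "UTAH": {"SRI", "UCSB"},
    }

    def route(src, dst, links):
        """Breadth-first search for any surviving path from src to dst."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None  # no path left: the network is partitioned

    print(route("UCLA", "UTAH", links))  # e.g. ['UCLA', 'SRI', 'UTAH']

    # Simulate a destroyed link: UCLA <-> SRI goes down.
    links["UCLA"].discard("SRI")
    links["SRI"].discard("UCLA")
    print(route("UCLA", "UTAH", links))  # traffic reroutes: ['UCLA', 'UCSB', 'UTAH']

As long as some link or links survive, a route exists; only when every path is severed does the routing come up empty.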

He then goes on to state:

It is difficult to determine, of course, what constitutes “group advantage.” Even at the risk of confusing my own individual objectives (or ARPA’s) with those of the “group,” however, let me try to set forth some of the things that might be, in some sense, group or system or network desiderata.

This is important. In this paragraph he acknowledges his own motive, but sets up a value proposition for the readers. He then goes on to lay out a future that includes an organization like what we now use the IETF for:

There will be programming languages, debugging languages, time-sharing system control languages, computer-network languages, data-base (or file-storage-and-retrieval languages), and perhaps other languages as well. It may or may not be a good idea to oppose or to constrain lightly the proliferation of such. However, there seems to me to be little question that it is desireable to foster “transfer of training” among these languages. One way in which transfer can be facilitated is to follow group consensus in the making of the arbitrary and nearly-arbitrary decisions that arise in the design and implementation of languages. There would be little point, for example, in having a diversity of symbols, one for each individual or one for each center, to designate “contents of” or “type the contents of.” 

The IETF and the IEEE now manage the specifications that lay out the structures controlling protocols and hardware, respectively. The early decisions were made for a small collection of nodes on the ARPANET, and as the nodes grew and the industry matured, protocols began to be defined very specifically, such as DNS, covered in, what, the second episode of this podcast? It’s important to note that Lick didn’t yet know what he didn’t know, but he knew that if things worked out, these governing bodies would need to emerge to keep splinter nets to a minimum. At the time, though, they weren’t thinking much about network protocols. They were speaking of languages. But he then goes on to lay out a network-control language, which is what would later emerge as protocols:

Is the network control language the same thing as the time-sharing control language? (If so, the implication is that there is a common time-sharing control language.) Is the network control language different from the time-sharing control language, and is the network-control language common to the several netted facilities? Is there no such thing as a network-control language? (Does one, for example, simply control his own computer in such a way as to connect it into whatever part of the already-operating net he likes, and then shift over to an appropriate mode?)

In the next few paragraphs he lays out a number of tasks that he’d like to accomplish, or at least that he can imagine others would like to accomplish, such as writing programs to run on computers, accessing files over the net, or reading in teletypes remotely. He lays out storing photographs on the network and running applications remotely, much the way we do with microservices today. He refers to information retrieval, searching for files based on metadata, natural language processing, accessing research from others, and bringing programs into a system from a remote repository, much as we do with CPAN, Python imports, and GitHub today.

Later, he looks at how permissions will be important on this new network:

Here is the problem of protecting and updating public files. I do not want to use material from a file that is in the process of being changed by someone else. There may be, in our mutual activities, something approximately analogous to military security classification. If so, how will we handle it?
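
What Lick is describing is what we’d now call the concurrent-update problem, and one everyday answer to it is file locking. Here’s a minimal sketch in Python, assuming a POSIX system; the filename is just an example, not anything from the memo:

    import fcntl

    # Take an exclusive advisory lock before updating a shared file, so no one
    # uses material from a file "in the process of being changed by someone else."
    with open("public_file.txt", "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other process holds the lock
        f.write("an update readers only see once it is complete\n")
        fcntl.flock(f, fcntl.LOCK_UN)   # release so others can proceed

The military-security-classification analogy he raises would eventually become permissions and access control, a separate mechanism layered on top of locking.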

It turns out that the first security issues arose because of eased restrictions on resources, whether that meant viruses, spam, or just accessing protected data. Keep in mind, the original network was meant to facilitate research during the Cold War. Can’t just have commies accessing raw military research, can we? As we near the end of the memo, he says:

The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence.

Again, it was meant to be a military network. It was meant to be resilient and to withstand a nuclear attack; that had already been discussed in meetings before this memo. Here, he’s posing questions to stakeholders. But consider the name of the memo: Memorandum For Members and Affiliates of the Intergalactic Computer Network. Not “A” network but “the” network. And not just any network, but THE Intergalactic Network. Sputnik had been launched in 1957. The next year we got NASA.

Eisenhower then began the process that resulted in the creation of ARPA, to do the basic research that would let the US leapfrog the Soviets. The Soviets had beaten the US to a satellite by using military rocketry to get to space. The US chose to use civilian rocketry and so set a standard that space (other than the ICBMs) would sit outside the Cold War. Well, ish.

But here, we were mixing military and civilian research in the hallowed halls of universities. We were taking the best and brightest and putting them in the employ of the military without putting them under the control of the military. It was a relationship that worked well until the Mansfield Amendment to the 1970 Military Authorization Act ended military funding of research that didn’t have a direct or apparent relationship to a specific military function. What happened between when Lick started handing out grants to people he trusted and that act would change the course of the world, allowing the US to do what the Soviets and other countries had been tinkering with: effectively develop a nationwide link of computers that provided for one of the biggest eras of collaborative research the world has ever seen. What the world wanted was an end to the violence in Vietnam. What it got was a transfer of technology from the military-industrial complex to corporate research centers like Xerox PARC, Digital Equipment Corporation, and others.

Lick then goes on to wrap the memo up:

In conclusion, then, let me say again that I have the feeling we should discuss together at some length questions and problems in the set to which I have tried to point in the foregoing discussion. Perhaps I have not pointed to all the problems. Hopefully, the discussion may be a little less rambling than this effort that I am now completing.

The researchers would continue to meet. They would bring the first node of the ARPANET online in 1969. In that time they’d also help fund research such as NLS, the oN-Line System, which eventually resulted in mainstreaming the graphical user interface and the mouse. Lick would found the Information Processing Techniques Office and launch Project MAC, the first big, serious research into personal computing. They’d fund Transit, an important navigation system that ran until 1996, when it was replaced by GPS. They built Shakey the robot. And yes, they did a lot of basic military research as well.

And today, modern networks are Intergalactic. A bunch of nerds put in their time planning and designing and took UCLA online, then Stanford Research Institute, then UCSB, and then a PDP-10 at the University of Utah. Four nodes, four types of computers, four operating systems. Leonard Kleinrock and the next generation would then take the torch and bring us into the modern era. But that story is another episode. Or a lot of other episodes.

We don’t have a true Cold War today. We do have some pretty intense rhetoric. And we have a global pandemic. Kinda’ makes you wonder what basic research is being funded today and how that will shape the world over the next 57 years, the way this memo has shaped the world. Or, given that there were programs in the Soviet Union and other countries to do something similar, was it really a matter of technological determinism? Not to take anything away from the hard work put in at ARPA and abroad, but for me at least, the jury is still out on that. I don’t have any doubt, though, that the next wave of changes will be even more impactful. Crazy to think, right?


Spam, Email, and Best Intentions

     10/4/2020

Spam emails are a fact of modern life. Who hasn't been sent annoying and sometimes cryptic messages from unidentified addresses? To understand where spam comes from, we need to look at the origins of email itself. Email has had a long and strange history, and so too have some of its most dubious uses.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Networking for a Nuclear War, the Americans

     8/11/2019

In this episode we are going to explore the ARPANET. This is a companion to the last episode, which covered contemporary Soviet attempts to create an early internet.

Like last time, today we are still in the Cold War era. Now, this won't be a point-by-point comparison of Soviet and US networks. They are totally different beasts. Instead, what I want to do is look at how ARPANET was developed, what influenced it, and how it would kick-start the creation of the internet.


How Not To Network A Nation: The Russian Internet That Wasn't

     11/2/2020

I just finished reading a book by Ben Peters called How Not To Network A Nation: The Uneasy History of the Soviet Internet. The book is an amazing deep dive into the Soviet attempts to build a national information network, primarily in the 60s. The book covers a lot of ground and has a lot of characters, though the most recurring is Viktor Glushkov; if the protagonist isn’t the Russian scientific establishment, perhaps it is Glushkov himself. And if there’s a primary theme, it’s looking at why the Soviets were unable to build a data network that covered the Soviet Union, allowing the country to leverage computing at a micro and a macro scale.

The final chapter of the book is one of the best and most insightful summaries I’ve ever read on the history of computers. While he doesn’t directly connect the command-and-control hierarchy of the former Soviet Union to how many modern companies are run, he does identify a number of ways that the Russian scientists were almost more democratic, at least in their zeal for a technocratic economy, than the US military-industrial-university complex of the 60s.

The sources and bibliography are simply amazing. I wish I had time to read, listen to, and digest all of the information that went into the making of this amazing book. And the way he cites notes that build to conclusions. Just wow.

In a previous episode, we covered the memo “Memorandum for Members and Affiliates of the Intergalactic Computer Network,” sent by JCR Licklider in 1963. This was where the US Advanced Research Projects Agency instigated a nationwide network for research. That network, called the ARPANET, would go online in 1969, and the findings would evolve and change hands when privatized into what we now call the Internet. We also covered the emergence of cybernetics, which Norbert Wiener defined in 1948 as the systems-based science of communication and automatic control systems, and we covered the other individuals influential in its development.

It’s easy to draw a straight line between that line of thinking and the evolution that led to the ARPANET. In his book, Peters shows how Glushkov discovered cybernetics and came to the same conclusion Licklider had: that the USSR needed a network that would link the nation. He was a communist, and so the network would help automate the command economy of the growing Russian empire, an empire that would need more people managing it than there were people in Russia if the bureaucracy continued to grow at the pace required to do the manual computing that got resources to factories and goods to people. He had this epiphany after reading Wiener’s book on cybernetics, which had been hidden away from the Russian people as American propaganda.

Glushkov’s contemporary, Anatoly Kitov, had come to the same realization back in 1959. By 1958 the US had developed the Semi-Automatic Ground Environment, or SAGE; the last of that equipment went offline in 1984. It was a system of networked radar equipment that could be used as eyes in the sky to detect a Soviet attack. A few years ago it was crazy to think about, but consider today a system capable of detecting influence in elections, and maybe it’s not so crazy anymore. SAGE linked computers built by IBM.

The Russians saw that kind of defense as cost prohibitive. Yet at Stalin’s orders they began to develop a ring of radar sites, a network of sorts, around Moscow in the early 50s, extending to Leningrad. They developed the BESM-1 mainframe from 1952 to 1953, and while Stalin was against computing and Western cybernetic doctrine outside of the military, they were, as in America, certainly linking sites to launch missiles. Lev Korolyov worked on BESM and then led the team that built the ballistic missile defense system.

So it should come as no surprise that after a few years Soviet scientists like Glushkov and Kitov would look to apply military computing know-how to fields like running the economics of the country. 

Kitov had seen technology patterns before they came. He studied nuclear physics before World War II, then rocketry after the war, and he then went to the Ministry of Defence at Bureau No 245 to study computing. This is where he came in contact with Wiener’s book on Cybernetics in 1951, which had been banned in Russia at the time. Kitov would work on ballistic missiles and his reputation in the computing field would grow over the years. Kitov would end up with hundreds of computing engineers under his leadership, rising to the rank of Colonel in the military. 

By 1954 Kitov was tasked with creating the first computing center for the Ministry of Defence, which would take on the computing tasks for the military. He would oversee the development of the M-100 computer and the transition into transistorized computers. By 1956 he had written a book called “Electronic Digital Computers,” and over time his views on computers grew to include solving problems that went far beyond science and the military, like running the economy.

Kitov came up with the Economic Automated Management System in 1959. This was denied because the military didn’t want to share their technology. Khrushchev sent Brezhnev, who was running the space program and an expert in all things tech, to meet with Kitov. Kitov was suggesting they use this powerful network of computer centers to run the economy when the Soviets were at peace and the military when they were at war. 

Kitov would ultimately realize that the communist party did not want to automate the economy. But his “Red Book” project would ultimately fizzle into one of reporting rather than command and control over the years. 

The easy answer as to why would be that Stalin had considered computers the tool of imperialists, and that feeling continued with some in the communist party. The issues are much deeper than that, though, and go to the heart of communism. You see, while we want to think that communism is about the good of all, it is irrational to think that people won’t act in their own self-interest. Microeconomics and macroeconomics. And automating command certainly seems to reduce the power of those in power, who would see that command taken over by a machine. And so Kitov was expelled from the communist party and could no longer hold a command.

Glushkov then came along recommending the National Automated System for Computation and Information Processing, or OGAS for short, in 1962. He had worked on computers in Kyiv and then moved up to become the Director of the Computer Center of the Academy of Science in Ukraine. Being even more bullish on the rise of computing, Glushkov went even further and added an electronic payment system on top of controlling a centrally planned economy. Computers were on the rise in various computer centers and other locations, and it just made sense to connect them. And they did, at small scales.

As was done at MIT, Glushkov built a walled garden of researchers in his own secluded nerd-heaven. He too made a grand proposal. He too saw the command economy of the USSR as one that could be automated with a computer, much as many companies around the world would do with ERP solutions in the coming decades.

The Glushkov proposal made it all the way to the top. They were able to show substantial return on investment, yet the proposal to build OGAS was ultimately shot down in 1970 after years of development. While the Soviets were attempting to react to the development of the ARPANET, they couldn’t get past infighting. The finance minister opposed it and flatly refused. There were concerns about which ministry the system would belong to, and basically political infighting much as I’ve seen at many of the top companies in the world (and increasingly in the US government).

A major thesis of the book is that the Soviet entrepreneurs trying to build the network acted more like capitalists than communists, and the Americans building our early networks acted more like socialists than capitalists. This isn’t about individual financial gain, though. Glushkov and Kitov in fact saw how computing could automate the economy to benefit everyone. But a point Peters makes in the book centers on informal financial networks. He points out that blat, the informal trading of favors that we might call a black market or corruption, was commonplace. An example he uses in the book: if a factory performs at 101% of expected production, the manager can just slide under the radar. But if it performs at 120%, those gains will be expected permanently, and if production ever dips below expectations, the manager might meet a poor fate. Thus blat provided a way to trade goods informally and keep the status quo. A computer doing daily reports would make this kind of flying under the radar of Gosplan, the Soviet State Planning Committee, difficult. Thus factory bosses would likely enter inaccurate information into the computers, furthering the tolkachi, or pushers, of blat.

A couple of points I’d love to add onto those Peters made, which wouldn’t be obvious without that amazing last chapter of the book. The first is that I’ve never read Bush, Licklider, or any of the early pioneers claim computers should run a macroeconomy. The closest thing would be computers running a capitalist economy’s markets. The New York Stock Exchange began the process of going digital in 1966, when the Dow was at 990. The Dow sat at about that same place until 1982. Can you imagine that these days? Things looked bad when it dropped to 18,500. And the London Stock Exchange held out on going digital until 1986, just a few years after the Dow finally moved over a thousand. Think about that as it hovers around 26,000 today. And look at the companies and imagine which could get by without computers running their operations, much less which are computer companies. There are 2 to 6 billion trades a day. It would probably take more than the population of Russia just to push those numbers if it all weren’t digital. In fact now, there’s an app (or a lot of apps) for that. But the point is, going back to Bush’s Memex, computers were to aid in human decision making. In a world with an exploding amount of data about every domain, Bush had prophesied the Memex would help connect us to data and help us do more. That underlying tenet infected everyone who read his article and is something I think of every time I evaluate an investment thesis based on automation.
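
As a rough sanity check on that claim, here’s a hedged back-of-envelope in Python; the ten-minutes-per-manual-trade figure is my own assumption, not a number from the episode or the book:

    # Take the episode's upper figure of 6 billion trades a day and assume a
    # clerk could process one trade every 10 minutes across an 8-hour shift.
    trades_per_day = 6_000_000_000
    trades_per_clerk = 8 * 60 / 10          # 48 trades per clerk per day
    clerks_needed = trades_per_day / trades_per_clerk
    print(f"{clerks_needed:,.0f}")          # 125,000,000 - on the order of Russia's population

Crude as it is, the arithmetic lands in the right ballpark for the point being made.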

There’s another point I’d like to add to this most excellent book. Computers developed in the US were increasingly general purpose and democratized. This led to innovative new applications just popping up and changing the world, like spreadsheets and word processors. Innovators weren’t just taking a factory “online” to track the number of widgets sold and deploying ICBMs; computers were foundations for building anything a young developer wanted to build. The uses in education with PLATO, in creativity with Sketchpad, in general-purpose languages and operating systems, in early online communities with mail and bulletin boards, in the democratization of the computer itself with the rise of the PC and the rapid proliferation that came with the introduction of games, and then the democratization of raw information with the rise of Gopher and the web and search engines. Miniaturized and in our pockets, those are the building blocks of modern society. And the word democratization, to me, means a lot.

But as Peters points out, sometimes the Capitalists act like Communists. Today we close down access to various parts of those devices by the developers in order to protect people. I guess the difference is now we can build our own but since so many of us do that at #dayjob we just want the phone to order us dinner. Such is life and OODA loops.

In retrospect, it’s easy to see how technological determinism would lead to global information networks. It’s easy to see electronic banking and commerce coming, and that people would pay for goods in apps, as Amazon stock soars over $3,000 and we watch what Jack Ma has done with Alibaba and the empires built by the technopolies at Amazon, Apple, Microsoft, and dozens of others. In retrospect, it’s easy to see the productivity gains. But at the time, it was hard to see the forest for the trees. The infighting got in the way. The turf-building. The potential of a bullet in the head from your contemporaries when they get in power can do that, I guess.

And so the networks failed to be developed in the USSR, while the ARPANET would be transferred to the National Science Foundation in 1985, and the other nets would grow until it was all privatized into the network we call the Internet today, around the same time the Soviet Union was dissolved. As we covered in the episode on the history of computing in Poland, empires simply grow beyond the communications mediums available at the time. By the fall of the Soviet Union, US organizations were networking in a build-up from early adopters, who made great gains in productivity and signaled the chasm-crossing that was the merging of the nets into the Internet. And people were using modems to connect to message boards and work with data remotely. Ironically, it’s that merged Internet that China has since splinternetted and that Russia seems poised to splinter further. But just as hiding Wiener’s cybernetics book from the Russian people slowed technological determinism in that country, cutting various parts of the Internet off in Russia will slow progress if it happens.

The Soviets did great work on macro- and microeconomic tracking and modeling under Glushkov and Kitov. Understanding what you have and how data and products flow is one key aspect of automation, and sometimes even more important in helping humans make better-informed decisions. Chile tried something similar in the early 70s under Salvador Allende, but that system failed as well.

And there’s a lot to digest in this story. But that word progress is important. Let’s say that Russian or Chinese crackers steal military-grade technology from US or European firms. Yes, they get the tech, but not the underlying principles that led to the development of that technology. Just as the US and its partners don’t proliferate all of their ideas and ideals when they restrict the proliferation of that technology in foreign markets. Phil Zimmermann opened the floodgates when he printed the PGP source code to enable the export of military-grade encryption. The privacy gained in foreign theaters contributed to greater freedoms around the world. And crime. But crime will happen in an oppressive regime just as it will in one espousing freedom.

So for you hackers tuning in, whether you’re building apps, hacking business, or reengineering for a better tomorrow: next time you’re sitting in a meeting and progress is being smothered at work, or next time you see progress being suffocated by a government, remember that those who you think are trying to hold you back either don’t see what you see, are trying to protect their own power, or might just be trying to keep progress from outpacing what their constituents are ready for. And maybe those are sometimes the same thing, just from a different perspective. Because “go fast at all costs” not only leaves people behind, it sometimes doesn’t build a better mousetrap than what we have today. Or, go too fast and, like Kitov, you get stripped of your command, no matter how much of a genius you or your contemporary Glushkov are. The YouTube video called “Internet of Colonel Kitov” has a great quote: “pioneers are recognized by the arrows sticking out of their backs.” But hey, at least history was on their side!

Thank you for tuning in to the History of Computing Podcast. We are so, so, so lucky to have you. Have a great day and I hope you too are on the right side of history!


Bob Taylor: ARPA to PARC to DEC

     1/15/2021

Robert Taylor was one of the true pioneers in computer science. In many ways, he is the string (or glue) that connected the US government’s era of supporting computer science through ARPA to the innovations that came out of Xerox PARC and then to the work done at Digital Equipment Corporation’s Systems Research Center. Those are three critical chapters in the history of computing, and while Taylor didn’t write any of the innovative code or develop any of the tools that came out of those three research environments, he saw the people and projects worth funding and made sure the brilliant scientists got what they needed to get things done.

The 31 years in computing that his stops represented were some of the most formative for the young computing industry, and his ability to inspire drove advances in an arc that began with Vannevar Bush’s 1945 article “As We May Think” and ended with the explosion of the Internet across personal computers.

Bob Taylor inherited a world where computing woke up with large, crusty, but finally fully digitized mainframes stuck to its eyes in the morning, and he went to bed the year Corel bought WordPerfect because PCs needed applications, the year the Pentium 200 MHz was released, the year the Palm Pilot and eBay were founded, the year AOL started to show articles from the New York Times, the year IBM opened a web shopping mall, and the year the Internet reached 36 million people. Excite and Yahoo went public. Sometimes big, sometimes small, all of these can be traced back to Bob Taylor - kinda’ how we can trace all actors to Kevin Bacon. But more like if Kevin Bacon found talent and helped them get started, by paying them during the early years of their careers…

How did Taylor end up as the glue for the young and budding computing research industry? Going from tween to teenager during World War II, he went to Southern Methodist University in 1948, when he was 16. He jumped into the US Naval Reserves during the Korean War and then got his master’s in psychology at the University of Texas at Austin using the GI Bill. Many of those pioneers in computing in the 60s went to school on the GI Bill. It was a big deal across every aspect of American life at the time, paving the way to home ownership, college educations, and new careers in the trades. From there, he bounced around, taking classes in whatever interested him, before taking a job at Martin Marietta, helping design the MGM-31 Pershing, and then ending up at NASA, where he discovered the emerging computer industry.

Taylor was working on projects for the Apollo program when he met JCR Licklider, known as the Johnny Appleseed of computing. Lick, as his friends called him, had written an article called “Man-Computer Symbiosis” in 1960 and laid out a plan for computing that influenced many. One such person was Taylor. Lick had taken over ARPA’s Information Processing Techniques Office, or IPTO, in 1962, and in 1965 he succeeded in recruiting Taylor away from NASA to eventually take his place running it.

Taylor had funded Douglas Engelbart’s research on computer interactivity at Stanford Research Institute while at NASA. He continued to do so when he got to ARPA and that project resulted in the invention of the computer mouse and the Mother of All Demos, one of the most inspirational moments and a turning point in the history of computing. 

They also funded a project to develop an operating system called Multics, a two-million-dollar project run by General Electric, MIT, and Bell Labs. Run through Project MAC at MIT, there were just too many cooks in the kitchen. Later, some of those Bell Labs cats would just do their own thing. Ken Thompson had worked on Multics and took the best and worst of it into account when he wrote the first lines of Unix and the B programming language, which led to one of the most important languages of all time, C.

Interactive graphical computing and operating systems were great, but IPTO, and so Bob Taylor and team, would fund, straight out of the Pentagon, the ability for one computer to process information on another computer. Which is to say, they wanted to network computers. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they’d awarded a contract to a company called Bolt Beranek and Newman (BBN) to build Interface Message Processors, or IMPs. The IMPs would connect a number of sites and route traffic, and the first one went online at UCLA in 1969, with additional sites coming online frequently over the next few years. That system would become the ARPANET, the commonly accepted precursor to the Internet.

There was another networking project going on at the time that was also getting funding from ARPA as well as the Air Force: PLATO, out of the University of Illinois. PLATO was meant for teaching and had begun in 1960, but by then they were on version IV, running on a CDC Cyber, and the time-sharing system hosted a number of “courses,” as programs were referred to there. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch-screen plasma displays, and the ability to dial into the system over phone lines, making it another early network.

Then things get weird. Taylor was sent to Vietnam as a civilian, although his rank equivalent would be a brigadier general, and he helped develop the Military Assistance Command in Vietnam. Battlefield operations and reporting were entering the computing era. The only problem is that, while Taylor was a war veteran and had been deep in the defense research industry for his entire career, Vietnam was an incredibly unpopular war, and seeing it first hand, getting pulled into the theater of war, had him ready to leave. That combined with interpersonal problems with Larry Roberts, who was running the ARPANET project by then and chafed at Taylor being his boss without a PhD or direct research experience. And so Taylor left ARPA and joined a project ARPA had funded at the University of Utah.

There, he worked with Ivan Sutherland, who wrote Sketchpad and is known as the Father of Computer Graphics, until he got another offer, this time from Xerox, to go to their new Palo Alto Research Center, or PARC. One rising star in the computer research world was pretty against the idea of a centralized, mainframe-driven time-sharing system. This was Alan Kay. In many ways, Kay was like Lick. And unlike the time-sharing projects of the day, the Licklider and Kay inspiration was for dedicated cycles on processors. This meant personal computers.

The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up, and the scientists working on those projects needed a new place to fund their playtime. Taylor was able to pick the best of the scientists he’d helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS.

This new Computer Science Laboratory landed people like Charles Thacker, David Boggs, Butler Lampson, and Bob Sproull, and it would develop the Xerox Alto, the inspiration for the Macintosh. The Alto contributed the very ideas of overlapping windows, icons, menus, cut and paste, and word processing. In fact, Charles Simonyi from PARC would work on Bravo before moving to Microsoft to spearhead Microsoft Word.

Bob Metcalfe on that team was instrumental in developing Ethernet so workstations could communicate with the ARPANET across the growing campus-connected environments. Metcalfe would leave to form 3Com.

SuperPaint would be developed there and Alvy Ray Smith would go on to co-found Pixar, continuing the work begun by Richard Shoup. 

They developed the laser printer and some of the ideas that ended up in TCP/IP, and their research into page layout languages would end up with Chuck Geschke, John Warnock, and others founding Adobe.

Kay would bring us the philosophy behind the Dynabook, which decades later would effectively become the iPad. He would also develop Smalltalk with Dan Ingalls and Adele Goldberg, ushering in the era of object-oriented programming.

They would do pioneering work on VLSI semiconductors, ubiquitous computing, and anything else to prepare the world to mass produce the technologies that ARPA had been spearheading for all those years. Xerox famously did not mass produce those technologies. And nor could they have cornered the market on all of them. The coming waves were far too big for one company alone. 

And so it was that PARC, unable to bring the future to the masses fast enough to impact earnings per share, got a new director in 1983. William Spencer was yet another of the three bosses Taylor clashed with. Some resented that Taylor didn’t have a PhD in a world where everyone else did. Others resented the close relationship he maintained with his teams. Either way, Taylor left PARC in 1983, and many of the scientists left with him.

It’s both a curse and a blessing to learn more and more about our heroes. Taylor was one of the finest minds in the history of computing. His tenure at PARC certainly saw a lot of innovation, from one of the most innovative teams ever assembled. But as many of us who have been put into positions of leadership know, it’s easy to get caught up in the politics. I am ashamed every time I look back and see examples of building political capital at the expense of a project, or letting an interpersonal problem get in the way of the greater good for a team. But also, we’re all human, and the people I’ve interviewed seem to match the accounts I’ve read in other books.

And so Taylor’s final stop was Digital Equipment Corporation, where he was hired to form their Systems Research Center in Palo Alto. They brought us the AltaVista search engine, the Firefly computer, Modula-3, and a few other advances. Taylor retired in 1996. DEC was acquired by Compaq in 1998, and when Compaq was acquired by HP, the SRC was merged with other labs at HP.

From ARPA to Xerox to Digital, Bob Taylor certainly left his mark on computing. He had a knack for seeing the forest for the trees and inspired engineering feats the world is still wrestling with how to bring to fruition. Raw, pure science. He died in 2017. He worked with some of the most brilliant people in the world at ARPA. He inspired passion, and sometimes drama, in what Stanford’s Donald Knuth called “the greatest by far team of computer scientists assembled in one organization.”

In his final email to his friends and former coworkers, he said, “You did what they said could not be done, you created things that they could not see or imagine.” The Internet, the personal computer, the tech that would go on to become Microsoft Office, object-oriented programming, laser printers, tablets, ubiquitous computing devices. So he isn’t exactly understating what they accomplished out of a false sense of humility. I guess you can’t do that often if you’re going to inspire the way he did.

So feel free to abandon the pretense as well, and go inspire some innovation. Heck, who knows where the next wave will come from. But if we aren’t working on it, it certainly won’t come.

Thank you so much and have a lovely, lovely day. We are so lucky to have you join us on yet another episode. 


(OldComputerPods) ©Sean Haas, 2020