The History of Computing

Hosted by Charles Edge

Computers touch almost every aspect of our lives today. We take the way they work for granted, along with the unsung heroes who built the technology, protocols, philosophies, and circuit boards, patched them all together - and sometimes willed amazingness out of nothing. Not in this podcast. Welcome to the History of Computing. Let's get our nerd on!

135 Episodes


The Laws And Court Cases That Shaped The Software Industry


The largest global power during the rise of intellectual property was England, so the world adopted her philosophies. The US had the same impact on software law.

Most case law that shaped the software industry is based on copyright law. Our first real software laws appeared in the 1970s, and we now have 50 years of jurisprudence to help guide us. This episode looks at the laws, Supreme Court cases, and some circuit appeals cases that shaped the software industry.


In our previous episode we went through a brief review of how the modern intellectual property laws came to be. Patent laws flowed from inventors in Venice in the 1400s, royals gave privileges to own a monopoly to inventors throughout the rest of Europe over the next couple of centuries, transferred to panels and academies during and after the Age of Revolutions, and slowly matured for each industry as technology progressed. 

Copyright laws formed similarly, although they were a little behind patent laws due to the fact that they weren’t really necessary until we got the printing press. But when it came to data on a device, we had a case in 1908 we covered in the previous episode that led Congress to enact the 1909 Copyright Act. 

Mechanical music boxes evolved into mechanical forms of data storage, and computing evolved from mechanical to digital. Following World War II there was an explosion in new technologies, with those in computing funded heavily by the US government. Or at least, until we got ourselves tangled up in a very unpopular asymmetrical war in Vietnam. The Mansfield Amendment of 1969 was a small provision in the 1970 Military Authorization Act that barred the US military from funding research that didn’t have a direct relationship to a specific military function. Money could still flow from ARPA into a program like the ARPAnet because we wanted to keep those missiles flying in case of nuclear war. But over time the impact was that a lot of the dollars the military had pumped into computing to help develop the underlying basic sciences behind things like radar and digital computing were about to dry up. This was a turning point: it was time to take the computing industry commercial. And that means lawyers.

And so we got the first laws pertaining to software shortly after the software industry emerged from more and more custom requirements for these mainframes and then minicomputers and the growing population of computer programmers. The Copyright Act of 1976 was the first major overhaul of copyright law since the 1909 Copyright Act. By then, the US had become a true world power, and much as the rest of the world had followed British law and used the Statute of Anne of 1710 as a template for copyright protections, the world looked on as the US developed its laws. Many nations had joined the Berne Convention for international copyright protections, but the publishing industry had exploded. We had magazines, so many newspapers, so many book publishers. And we had this whole weird new thing to deal with: software. 

Congress didn’t explicitly protect software in the Copyright Act of 1976. But it did add cards and tape as mediums, and Congress knew this was an exploding new thing that would work itself out in the courts if they didn’t step in. And of course executives from the new software industry were asking their representatives to get in front of things rather than have the unpredictable courts adjudicate a weird copyright mess in places where technology meets copy protection. So in section 117, Congress appointed the National Commission on New Technological Uses of Copyrighted Works (CONTU) to provide a report about software and added a placeholder in the act that empaneled them.

CONTU held hearings. They went beyond just software as there was another newish technology changing the world: photocopying. They presented their findings in 1978 and recommended we define a computer program as a set of statements or instructions to be used directly or indirectly in a computer in order to bring about a certain result. They also recommended that copies be allowed if required to use the program and that those be destroyed when the user no longer has rights to the software. This is important because this is an era where we could write software into memory or start installing compiled code onto a computer and then hand the media used to install it off to someone else. 

At the time the hobbyist industry was just about to evolve into the PC industry, but hard disks were years out for most of those machines. It was all about floppies. But up-market there was all kinds of storage, and the writing was on the wall about what was about to come. Install software onto a computer, copy and sell the disk, move on. People would of course do that, but not legally. 

Companies could still sign away their copyright protections as part of a sales agreement, but the right to copy was under the creator’s control. But things like End User License Agreements were still far away. Imagine how ludicrous the idea would have seemed in the 1970s that a piece of software going bad could put a company out of business. That would come as we needed to protect against liability and not just restrict the right to copy to those who, well, had the right to do so. Further, we hadn’t yet standardized on computer languages. And yet companies were building complicated logic to automate business and needed to be able to adapt works for other computers, and so Congress looked to provide that right at the direction of CONTU as well, if only to the company doing the customizations, and not allowing the software to then be resold. These were all hashed out and put into law in 1980.

And that’s an important moment, as suddenly the party who owned a copy was the rightful owner of a piece of software. Many of the provisions read as though we were dealing with booksellers selling a copy of a book, not the intricate details of the technology. But technology changes so quickly, and those who make laws aren’t exactly technologists, so that’s to be expected. 

Source code versus compiled code also got tested. In 1982, Williams Electronics v Artic International explored a video game that lived in a ROM (which is how games were distributed before disks and cassette tapes). Here, the Third Circuit weighed in on whether, if the ROM was built into the machine, it could be copied because it was utilitarian and therefore not covered under copyright. The source code was protected, but what about what amounts to compiled code sitting on the ROM? They of course found that it was indeed protected. 

They again weighed in on Apple v Franklin in 1983. Here, Franklin Computer was cloning Apple computers and claimed it couldn’t clone the computer without copying what was in the ROMs, which at the time held a remedial version of what we think of as an operating system today. Franklin claimed the OS was in fact a process or method of operation, and Apple claimed it was novel. At the time the OS was converted to a binary language at runtime, and that object code was a task called Applesoft, but it was still a program and thus still protected. One and two years later respectively, we got Mac OS 1 and Windows 1.

1986 saw Whelan Associates v Jaslow. Here, Elaine Whelan created a management system for a dental lab on the IBM Series/1, in EDL. That was a minicomputer, and when the personal computer came along she sued Jaslow because he took a BASIC version to market for the PC. He argued it was a different language and the set of commands was therefore different. But the programs looked structurally similar. She won, because even though the literal code differed, “the copyrights of computer programs can be infringed even absent copying of the literal elements of the program.” It’s simple to identify literal copying of software code when it’s done verbatim, but difficult to identify non-literal copyright infringement. 

But this was all professional software. What about those silly video games all the kids wanted? Well, Atari applied for a copyright for one of their games, Breakout. Here, the Register of Copyrights, Ralph Oman, chose not to register the copyright. And so Atari sued, winning on appeal.

There were certainly other dental management packages on the market at the time. But the court found that “copyrights do not protect ideas – only expressions of ideas.” Many found fault with the decision, and the Second Circuit heard Computer Associates v Altai in 1992. Here, the court applied a three-step Abstraction-Filtration-Comparison test to determine how similar the products were and held that Altai's rewritten code did not meet the necessary requirements for copyright infringement.

There were other types of litigation surrounding the emerging digital sphere at the time as well. The Computer Fraud and Abuse Act came along in 1986 and would be amended in 1989, 1994, 1996, and 2001. Here, a number of criminal offenses were defined - not copyright, but they have come up to criminalize activities that should have otherwise been copyright cases. And as the rental market came to be, the Copyright Act of 1976, along with the CONTU findings, was amended to cover it (much as happened with VHS tapes), with Congress establishing provisions to cover that in 1990. Keep in mind that time sharing was just ending by then, but we could rent video games over dial-up, and of course VHS rentals were huge at the time.

Here’s a fun one: Atari infringed on Nintendo’s copyright by claiming they were a defendant in a case and applying to the Copyright Office to get a copy of the 10NES program, so they could actually infringe on the copyright. They tried to claim they couldn’t infringe because they couldn’t make games unless they reverse engineered the systems. Atari lost that one. But Sega won a similar one soon thereafter, because playing more games on a Sega was fair use. Sony tried to sue Connectix in a similar case, where you booted the PlayStation console using a BIOS provided by Connectix. And again, that was reverse engineering for the sake of fair use of a PlayStation people paid for. Kinda’ like jailbreaking an iPhone, right? Yup, apps that help jailbreak, like Cydia, are legal on an iPhone. But Apple moves the cheese so much in terms of what’s required to make it work that it’s a bigger pain to jailbreak than it’s worth. Much better than suing everyone. 

Laws are created and then refined in the courts. MAI Systems Corp. v. Peak Computer made it to the Ninth Circuit Court of Appeals in 1993. This involved Eric Francis leaving MAI and joining Peak. He then loaded MAI’s diagnostic tools onto computers. MAI thought they should have a license per computer, yet Peak used the same disk in multiple computers. The crucial change here was that the copy made, while ephemeral, was decided to be a copy of the software and so violated the copyright. We said we’d bring up that EULA, though. In 1996, the Seventh Circuit found in ProCD v Zeidenberg that the license preempted copyright, thus allowing companies to use either copyright law or a license when seeking damages and giving lawyers yet another reason to answer any and all questions with “it depends.”

One thing was certain: the digital world was coming fast in those Clinton years. I mean, the White House would have a Gopher page and Yahoo! would be on display at his second inauguration. So in 1998 we got the Digital Millennium Copyright Act (DMCA). Here, Congress added to Section 117 to allow for software copies if the software was required for maintenance of a computer. And yet software was still just a set of statements, like instructions in a book, that led the computer to a given result. The DMCA did have provisions covering content providers and e-commerce providers. It also implemented two international treaties and provided remedies against circumvention of copy-prevention systems, since by then cracking was becoming a bigger thing. There was more packed in here. We got MAI Systems v Peak Computer reversed by law, refinements to how the Copyright Office works, modernized audio and movie rights, and provisions to facilitate distance education. And of course the DMCA protected boat hull designs because, you know, might as well cram some stuff into a digital copyright act. 

In addition to the cases we covered earlier, we had Mazer v Stein, Dymow v Bolton, and even Computer Associates v Altai, which cemented the AFC method as the means by which most courts determine whether copyright protection extends to non-literal components such as dialogue and images. Time and time again, courts have weighed in on what fair use is, because the boundaries are constantly shifting, in part due to technology, but also in part due to shifting business models. 

One of those shifting business models was ripping songs and movies. RealDVD got sued by the MPAA for allowing people to rip DVDs. YouTube would later get sued by Viacom, but courts found no punitive damages could be awarded. Still, many online portals started to scan for and filter out works they knew were copy protected, especially given the rise of machine learning to aid in the process. But those were big, major companies at the time. IO Group, Inc sued Veoh for uploaded video content, and the judge found Veoh was protected by safe harbor. 

Safe harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, which shields online portals and internet service providers from liability for copyright infringement by their users. This is separate from Section 230, which protects those same organizations from being sued for third-party content uploaded to their sites. That’s the law Trump wanted overturned during his final year in office, but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and courts in lots of other countries like England and Germany have found similarly, it is now part of being an Internet company. Although the future of “big tech” cases (and the damage many claim is being done to democracy) may find it refined or limited.

In 2016, Cisco sued Arista for allegedly copying the command line interfaces to manage switches. Cisco lost but had claimed more than $300 million in damages. Here, the existing Cisco command structure allowed Arista to recruit seasoned Cisco administrators to the cause. Cisco had done the mental modeling to evolve those commands for decades and it seemed like those commands would have been their intellectual property. But, Arista hadn’t copied the code. 

Then in 2017, in ZeniMax vs Oculus, ZeniMax won a half billion dollar case against Oculus for copying their software architecture. 

And we continue to struggle with what copyright means as far as code goes. Just in 2021, the Supreme Court ruled in Google v Oracle America that using application programming interfaces (APIs), including representative source code, can be transformative and fall within fair use, though it did not rule on whether such APIs are copyrightable. I’m sure the CP/M team, who once practically owned the operating system market, would have something to say about that after Microsoft swooped in and recreated much of the work they had done. But that’s for another episode.

And traditional media cases continue. ABS Entertainment vs CBS looked at whether digitally remastering works extended copyright. BMG vs Cox Communications challenged peer-to-peer file-sharing in safe harbor cases (not to mention the whole Napster testifying before Congress thing). You certainly can’t resell mp3 files the way you could drop off a few dozen CDs at Tower Records, right? Capitol Records vs ReDigi said nope. Perfect 10 v Amazon, Goldman v Breitbart, and so many more cases continued to narrow down who could restrict the right to copy audio, images, text, and other works, and how. But sometimes it’s confusing. Dr. Seuss vs ComicMix asked whether merging Star Trek and “Oh, the Places You’ll Go” was transformative enough to escape the copyright of Dr. Seuss - or was that the Fair Use Doctrine? Sometimes I find conflicting lines in opinions. Speaking of conflict…

Is the government immune from copyright? Allen v Cooper, Governor of North Carolina made it to the Supreme Court, where the justices upheld the state’s sovereign immunity. Now, this was a shipwreck case, but it extended to digital works, and the Supreme Court seemed to begrudgingly find for the state, looking to a law as a remedy rather than awarding damages. In other words, the “digital Blackbeards” of a state could pirate software at will. Guess I won’t be writing any software for the state of North Carolina any time soon!

But what about content created by a state? Well, the state of Georgia makes various works available behind a paywall. That paywall might be run by a third party in exchange for a cut of the proceeds. So Public.Resource goes after anything where the edict of a government isn’t public domain. In other words, court decisions, laws, and statutes should be free to all who wish to access them. The “government edicts doctrine” won in the end, and so access to the laws of the nation continues to be free.

What about algorithms? That’s more patent territory, when they are actually patentable, which is rare. In Gottschalk v. Benson, a patent was denied for a new way to convert binary-coded decimals to numerals, while Diamond v Diehr held that an algorithm to run a rubber molding machine was patentable. And companies like Intel and Broadcom hold thousands of patents for microcode for chips.

What about the emergence of open source software and the laws surrounding social coding? We’ll get to the emergence of open source and the consequences in future episodes!

One final note: most have never heard of the names in the early cases. Most have heard of the organizations listed in the later cases. Settling issues in the courts has gotten really, really expensive. And it doesn’t always go the way we want. So these days, whether it’s Apple v Samsung or other tech giants, the law seems to be reserved for those who can pay for it. Sure, there are the Erin Brockovich cases of the world. And Lady Justice is still blind. We can still represent ourselves, and case law and notes are free. But money can win cases by buying attorneys with deep knowledge (which doesn’t come cheap). And these cases drag on for years, and given that the startup assembly line often halts with pending legal actions, not many can withstand the latency incurred. This isn’t a “big tech is evil” comment as much as an “I see it and don’t know a better rubric but it’s still a thing” kinda’ comment.

Here’s something better that we’d love to have a listener take away from this episode. Technology is always changing. Laws usually lag behind technological change because (like us) they’re reactive to innovation. When those changes come, there is opportunity. Not only has the technological advancement gotten substantial enough to warrant lawmaker time, but the changes often create new gaps in markets that new entrants can leverage. Either leaders in markets adapt quickly, or they watch upstarts swoop in, carrying no technical debt and able to pivot faster than those who previously enjoyed a first mover advantage. What laws are out there being hashed out, just waiting to disrupt some part of the software market today?

Origins of the Modern Patent And Copyright Systems


Once upon a time, the right to copy text wasn’t really necessary. If one had a book, one could copy the contents of the book by hiring scribes to labor away at the process and books were expensive. Then came the printing press. Now, the printer of a work would put a book out and another printer could set their press up to reproduce the same text. More people learned to read and information flowed from the presses at the fastest pace in history. 

The printing press spread from Gutenberg’s workshop in the 1440s throughout Germany and then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, causing England to retreat into protectionism, and Henry VIII tried to restrict what could be printed in the 1500s. But Parliament would need to legislate further. 

England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent the printing of scandalous materials, and it basically gave a monopoly to The Stationers’ Company to register, print, copy, and publish books. They could enter another printer’s shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society. 

The next big step came in England when the Statute of Anne was passed in 1710. It was named for the reigning queen, the last of the House of Stuart. While previously a publisher could appeal to have a work censored by others because the publisher had created it, this statute took a page out of the patent laws and granted a right of protection against copying a work for 14 years. Reading through the law and further amendments, it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and how to get more books to more people. They’d clearly become less protectionist and more concerned about a literate society. 

There are examples throughout history of granting exclusive rights to an invention, from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those grants was kept. But they could also be used to grant a monopoly, with the first that’s been found granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish. 

Still, this wasn’t exactly an exclusive right but instead a right to emigrate. And the letters were personal, and so letters patent evolved into royal grants, which Queen Elizabeth was providing in the late 1500s. That emerged out of the need for patent laws proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II of France established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent right review.

English law evolved, and perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money, allowing a specific industry to turn into a monopoly that raised funds for the royal family. James I was forced to revoke the previous patents, but a system was needed. And so the patent system was more formalized, and patents for inventions were limited to 14 years when the Statute of Monopolies was passed in England in 1624. Over the next few decades we started seeing drawings added to patent requests, and sometimes even required. We saw forks in industries, the addition of medical patents, and an explosion in the various types of patents requested. 

They weren’t just in England. The mid-1600s saw the British Colonies issuing their own patents, and patent law was evolving outside of England as well. The French system was growing larger with more discoveries. By 1729 there were digests of patents being printed in Paris, and we still keep open listings of patents so they’re easily proven in court. The maturation of the Age of Enlightenment clashed with the financial protectionism of patent laws, and intellectual property as a concept emerged, borrowing from the patent institutions and bringing us right back to the Statute of Anne, which established the modern copyright system. That and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, probably mostly because they’d long been removed from monarchal control and handed to various public institutions.

The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutions throughout the 1820s, the end of feudalism, Napoleon. But the wars settled down, and a world order of sorts came during the late 1800s. One aspect of that world order was the Berne Convention, which was signed in 1886. This established the mutual recognition of copyrights among the sovereign nations that signed onto the treaty, rather than having various nations enter into pacts with one another. Now the right to copy works was automatically in force at creation, so authors no longer had to register their mark in Berne Convention countries.

Following the Age of Revolutions, there was also an explosion of inventions around the world. Some ended up putting copyrighted materials onto reproducible forms. Early data storage. Previously we could copyright sheet music, but the introduction of the player piano led to the need to determine the copyrightability of piano rolls in White-Smith Music v. Apollo in 1908. Here the US Supreme Court found that these were not copies as interpreted in the US Copyright Act, because only a machine could read them, and the justices basically told Congress to change the law. So Congress did.

The Copyright Act of 1909 then specified that even if only a machine can use information that’s protected by copyright, the copyright protection remains. And so things sat for a hot minute as we invented first mechanical computing, which was patentable under the old rules, and then electronic computing, which was also patentable. Jacquard patented his punch cards in 1801, but by the time Babbage and Lovelace used them in the design of his engines, that patent had expired. The first digital computer to get a patent was the Eckert-Mauchly ENIAC: the patent was filed in 1947, granted in 1964, and, because there was a prior unpatented work, overturned in 1973. Dynamic RAM was patented in 1968. But these were all physical inventions.

Software took a little longer to become a legitimate legal quandary. The time it took to reproduce punch cards, and the lack of really mass-produced software, meant it didn’t become an issue until after the advent of machines like Whirlwind, the DEC PDP line, and the IBM S/360.

Inventions didn’t need a lot of protections when they were complicated and it took years to build one. I doubt the inventor of the Antikythera Device in Ancient Greece thought to protect their intellectual property; they’d have likely been delighted if anyone else in the world had thought to, or been capable of, creating what they created. Over time, the capabilities of others rise and our intellectual property becomes more valuable, because progress moves faster with each generation. Those Venetians saw how technology and automation were changing the world and allowed the protection of inventions to provide a financial incentive to invent. Licensing the commercialization of inventions then allowed us to begin the slow process of putting ideas on a commercialization assembly line. 

Books didn’t need copyright until they could be mass produced and become commercially viable. A writer writes, or creates intellectual property, and a publisher prints and distributes. Thus we put the commercialization of literature and thoughts and ideas on an assembly line. And we began doing so far before the Industrial Revolution. 

Once there were more inventions, and some became capable of mass producing the registered intellectual property of others, we saw a clash between copyrights and patents. And so we got the Copyright Act of 1909. But with digital computers we suddenly had software emerging as an entire industry. IBM had customized software for customers for decades, but computer languages like FORTRAN and mass storage devices that could be moved between computers allowed software to move between machines, and sometimes entire segments of business logic moved between companies based on that software. By the 1960s, companies were marketing computer programs as a cottage industry. 

The first computer program was deposited at the US Copyright Office in 1961. It was a simple thing. A tape with a computer program that had been filed by North American Aviation. Imagine the examiners looking at it with their heads cocked to the side a bit. “What do we do with this?” They hadn’t even figured it out when they got three more from General Dynamics and two more programs showed up from a student at Columbia Law. 

A punched tape held the same kind of data as a stack of punched cards. A magnetic tape just held more of it and read faster. This was pretty much what those piano rolls from the 1909 law had on them. Registration was added for all five in 1964. And thus software copyright was born. But of course it wasn’t just a metallic roll with impressions marking where a player piano struck a hammer. If someone found a roll on the ground, they could put it into another piano and hit play, but the likelihood that they could reproduce the piano roll was low. The ability to reproduce punch cards had been there. But while it likely didn’t take the same amount of time it took to reproduce a copy of Plato’s Republic before the advent of the printing press, the occurrences weren’t frequent enough to create a real need for adjudication. That changed with high speed punch devices and then the ability to copy magnetic tape.

Contracts (which we might think of as EULAs today, in a way) provided a license for a company to use software, but new questions were starting to form around who was bound to the contract and how protection was extended based on a number of factors. Thus the “LA”, or License Agreement, part of EULA, rather than just a contract when buying a piece of software. 

And this brings us to the forming of the modern software legal system. That’s almost a longer story than the written history we have of early intellectual property law, so we’ll pick that up in the next episode of the podcast!

A History Of Text Messages In A Few More Than 160 Characters


Texts are sent and received using SMS, or Short Message Service. Due to the amount of bandwidth available on second generation networks, messages were initially limited to 160 characters. You know the 140-character max from Twitter. We are so glad you chose to join us on this journey, where we weave our way from the topmasts of the 1800s to the skinny jeans of San Francisco with Twitter.
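To put a little arithmetic behind that limit (a minimal sketch we've added, not something from the episode): a single SMS payload is 140 octets, and the GSM default alphabet packs each character into 7 bits rather than a full byte, so 140 × 8 ÷ 7 = 160 characters. The function name here is ours.

```python
# Why a single text tops out at 160 characters: the SMS payload is
# 140 octets, and the GSM 03.38 default alphabet uses 7 bits per
# character instead of 8.

PAYLOAD_OCTETS = 140  # fixed payload size of one SMS

def max_chars(bits_per_char: int) -> int:
    """Characters that fit in one 140-octet SMS payload."""
    return PAYLOAD_OCTETS * 8 // bits_per_char

print(max_chars(7))   # GSM 7-bit alphabet -> 160 characters
print(max_chars(8))   # 8-bit data         -> 140 characters
print(max_chars(16))  # UCS-2 encoding     -> 70 characters
```

The same arithmetic explains why texts in scripts that need the 16-bit UCS-2 encoding get only 70 characters per message.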

What we want you to think about through this episode is the fact that this technology has changed our lives. Before texting we had answering machines, we wrote letters, we sent more emails, but we didn’t have an expectation of an immediate response. Maybe someone got back to us the next day, maybe not. But now we rely on texting to coordinate gatherings, pick up the kids, get a pin on a map, provide technical support, send links, send memes, and convey feelings in ways that we didn’t when writing letters. I mean, including an animated gif in a letter meant melty peanut butter. Wait, that’s Jif. Sorry.

And few technologies in history have sprung into our everyday use so quickly. It took generations, if not 1,500 years, for bronze working to migrate out of the Vinča Culture and bring an end to the Stone Age. It took a few generations, if not a couple hundred years, for electricity to spread throughout the world. The rise of computing took a few generations to spread from mechanical to digital, then to personal computing, and now to ubiquitous computing. And we’re still struggling to come to terms with job displacement and the productivity gains that have shifted humanity more rapidly than at any other time, including the collapse of the Bronze Age. 

But the rise of cellular phones, and then the digitization of them combined with globalization, has put instantaneous communication in the hands of everyday people around the world. We've decreased our reliance on paper and transporting paper and moved more rapidly into a digital, even post-PC era. And we're still struggling to figure out what some of this means. But did it happen as quickly as we think? Let's look at how we got here.

Bell Telephone introduced the push button phone in 1963 to replace the rotary dial telephone that had been invented in 1891 and become a standard. And it was only a matter of time before we’d find a way to associate letters to it. Once we could send bits over devices instead of just opening up a voice channel it was only a matter of time before we’d start sending data as well. Some of those early bits we sent were things like typing our social security number or some other identifier for early forms of call routing. Heck the fax machine was invented all the way back in 1843 by a Scottish inventor called Alexander Bain. 

So given that we were sending different types of data over permanent and leased lines it was only a matter of time before we started doing so over cell phones. 

The first cellular networks were analog, in what we now think of as first generation, or 1G. GSM, or Global System for Mobile Communications, is a standard that came out of the European Telecommunications Standards Institute and started getting deployed in 1991. That became what we now think of as 2G and paved the way for new types of technologies to get rolled out.

The first text message simply said "Merry Christmas" and was sent on December 3rd, 1992. It was sent to Richard Jarvis at Vodafone by Neil Papworth. As with a lot of technology, it was actually thought up eight years earlier, by Bernard Ghillebaert and Friedhelm Hillebrand. From there, the use cases moved to simply alerting devices of various statuses, like when there was a voice mail waiting. These days we mostly use push notification services for that. 

To support using SMS for that, carriers started building out SMS gateways, and by 1993 Nokia was the first cell phone maker to actually support end-users sending text messages. Texting was expensive at first, but adoption slowly increased. We could text in the US by 1995, but cell phone subscribers were sending fewer than six texts a year on average. As networks grew and costs came down, adoption increased to a little over one text a day by the year 2000. 

Another reason adoption was slow was that using multi-tap to send a message sucked. Multi-tap was where we had to use the 10-key pad on a device to type out messages. You know, ABC are on the 2 key, so the first time you tap the 2 it's the number, the next time it's an A, the next a B, the next a C. The 3 key is D, E, and F. The 4 is G, H, and I and the 5 is J, K, and L. The 6 is M, N, and O and the 7 is P, Q, R, and S. The 8 is T, U, and V and the 9 is W, X, Y, and Z. This layout goes back to old Bell phones that had those letters printed under the numbers. That way if we needed to call 1-800-PODCAST we could map which letters went to what. 
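That keypad mapping is simple enough to sketch in a few lines. The function names here are my own, but the digit-to-letter layout is the standard one printed under the numbers on those Bell phones:

```python
# Standard telephone keypad layout, as printed under the numbers.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

# Reverse mapping: which digit does each letter live on?
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def phoneword_to_digits(number: str) -> str:
    """Translate a vanity number like 1-800-PODCAST into the digits to dial."""
    return "".join(LETTER_TO_DIGIT.get(ch.upper(), ch) for ch in number)

def multitap_presses(word: str) -> int:
    """Count multi-tap key presses: each letter costs its position on the key, plus one."""
    return sum(KEYPAD[LETTER_TO_DIGIT[ch]].index(ch) + 1 for ch in word.upper())

print(phoneword_to_digits("1-800-PODCAST"))  # 1-800-7632278
print(multitap_presses("HELLO"))  # 13
```

Thirteen presses for a five-letter word - you can see why predictive text (T9) and full keyboards were such a relief.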

A small company called Research in Motion introduced an Inter@ctive Pager in 1996 to do two-way paging. Paging services went back decades. My first was a SkyTel, which has its roots in Mississippi, where John N. Palmer bought a 300-person paging company running an old-school radio paging service. The FCC license he picked up grew through more acquisitions across Alabama, Louisiana, and New York, going national in the mid-80s and reaching 30,000 subscribers in 1989 and over 200,000 less than four years later. The market validated, RIM introduced the BlackBerry on the DataTAC network in 2002, expanding from just text to email, mobile phone services, faxing, and soon web browsing. We got the Treo the same year. But that now-iconic BlackBerry keyboard wasn't an entirely new idea: Nokia had been the first cellular device maker to put a full keyboard on a phone, with the Nokia 9000i Communicator in 1997.

But by now, more and more people were thinking about what the future of mobility would look like. The 3rd Generation Partnership Project, or 3GPP, was formed in 1998 to dig into next generation networks. It began as an initiative at Nortel and AT&T but grew to include NTT DoCoMo, British Telecom, BellSouth, Ericsson, Telenor, Telecom Italia, and France Telecom - a truly global footprint. With a standards body in place, we could move faster, and they began planning the roadmap for 3G and beyond (at this point we're on 5G). 

Faster data transfer rates let us do more. We weren't just sending texts any more. MMS, or Multimedia Messaging Service, was then introduced, and use grew to hundreds of millions and then billions of photos, encoded using technology like the MIME we use for multimedia content on websites. At this point, people were paying a fee for every x number of messages and every MMS. Phones had cameras now, so in a pre-Instagram world this was how we shared photos. Granted, they were blurry by modern standards, but progress. Devices became more and more connected as data plans expanded, eventually often becoming unlimited.
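MMS borrowed the same multipart idea that MIME brought to email and the web: a container that carries text and binary parts side by side. A minimal sketch using Python's standard email library (the image bytes here are a stand-in, not a real photo):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

# Build a multipart container holding a caption and an image - the same
# structural idea MMS messages use to carry multimedia content.
message = MIMEMultipart()
message.attach(MIMEText("Look at this blurry-but-artistic photo!"))

fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16  # stand-in JPEG bytes
message.attach(MIMEImage(fake_jpeg, _subtype="jpeg"))

print(message.get_content_type())  # multipart/mixed
print(len(message.get_payload()))  # 2 parts: text and image
```

Each part carries its own content type, so the receiving device knows which parts to render as text and which to decode as pictures.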

But SMS was still slow to evolve in a number of ways. For example, group chat was not really much of a thing. That is, until 2006 when a little company called Twitter came along to make it easy for people to post a message to their friends. Initially it worked over text message until they moved to an app. And texting was used by some apps to let users know there was data waiting for them. Until it wasn’t. Twilio was founded in 2008 to make it easy for developers to add texting to their software. Now every possible form of text integration was as simple as importing a framework.

Apple introduced the Apple Push Notification service, or APNs, in 2009. By then devices were always connected to the Internet, and the constant send-and-receive polling for email and other apps that was fine on desktops was destroying battery life. APNs allowed developers to build apps that established a communication channel only when there was data waiting. Initially push notifications were limited to 256 bytes, but due to their popularity and different implementation needs, the payload grew to 2 kilobytes, and in 2015 APNs moved to an HTTP/2 interface with a 4-kilobyte payload. This is important because it paved the way for iMessage - delivered through the Messages app that replaced iChat - and then other similar services for various platforms that moved instant messaging off SMS and MMS and over to the vendor who builds the device. 
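The payload shape below follows the documented APNs JSON format (an `aps` dictionary with keys like `alert` and `badge`); the size-check helper is just my own sketch of validating a notification against the limits mentioned above:

```python
import json

def fits_limit(payload: dict, limit: int) -> bool:
    """Return True if the JSON-encoded payload fits within the byte limit."""
    return len(json.dumps(payload).encode("utf-8")) <= limit

# The documented APNs payload shape: an "aps" dictionary with the alert text.
notification = {"aps": {"alert": "You have a new message", "badge": 1}}

# A simple alert easily fits even the original 256-byte limit.
print(fits_limit(notification, 256))   # True
print(fits_limit(notification, 4096))  # True - plenty of room under HTTP/2
```

Staying under the limit mattered: an oversized payload was simply rejected by the service rather than truncated.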

Facebook Messenger came along in 2011, and now the kids use Instagram messaging, Snapchat, Signal, or any number of other messaging apps. Or they just text. It's one of a billion communications tools that also include Discord, Slack, Teams, LinkedIn, or even the in-game options in many a game. Kinda' makes restricting communications - and spam - a bit of a challenge at this point. 

My kid finishes track practice early. She can just text me. My dad can’t make it to dinner. He can just text me. And of course I can get spam through texts. And everyone can message me on one of about 10 other apps on my phone. And email. On any given day I receive upwards of 300 messages, so sometimes it seems like I could just sit and respond to messages all day every day and still never be caught up. And get this - we’re better for it all. We’re more productive, we’re more well connected, and we’re more organized. Sure, we need to get better at having more meaningful reactions when we’re together in person. We need to figure out what a smaller, closer knit group of friends is like and how to be better at being there for them rather than just sending a sad face in a thread where they’re indicating their pain. 

But there's always a transition where we figure out how to embrace these advances in technology. There are always opportunities in the advancements and there are always new evolutions built atop previous evolutions. The rate of change is increasing. The reach of change is increasing. And the speed at which changes propagate is unparalleled today. Some will rebel against changes, seeking solace in older ways. It's always been like that - the Amish can often be seen on a buggy pulled by a horse, so a television or phone capable of texting would certainly be out of the question. Others embrace technology faster than some of us are ready for. Like when I realized some people had moved away from talking on phones and were pretty exclusively texting. Spectrums.

I can still remember picking up the phone and hearing a neighbor on with a friend. Party lines were still a thing in Dahlonega, Georgia when I was a kid. I can remember the first dedicated line and getting in trouble for running up a big long distance bill. I can remember getting our first answering machine and changing messages on it to be funny. Most of that was technology that moved down market but had been around for a long time. The rise of messaging on the cell phone then smart phone though - that was a turning point that started going to market in 1993 and within 20 years truly revolutionized human communication. How can we get messages faster than instant? Who knows, but I look forward to finding out. 

Project Xanadu


Java, Ruby, PHP, Go. These are languages used to build web applications that dynamically generate content, which is then interpreted by a web browser. That content is rarely static these days, and the power of the web is that an app or browser can reach out, obtain some data - some XML or JSON or YAML - and provide an experience to a computer, mobile device, or even embedded system. The web is arguably the most powerful, transformational technology in history.

But the story of the web begins in philosophies that far predate its inception. It goes back to a file, which we can think of as a document, on a computer that another computer reaches out to and interprets. A file comprised of hypertext. Ted Nelson coined the term hypertext. Plenty of others put the concepts of linking objects into the mainstream of computing. But he coined the term that he’s barely connected to in the minds of many.  Why is that?

Tim Berners-Lee invented the World Wide Web in 1989. Elizabeth Feinler developed a registry of names that would evolve into DNS, so we could find computers online and access web sites without typing in impossible-to-remember numbers. Bob Kahn and Vint Cerf were instrumental in the Internet Protocol, which connected all those computers together and provided the scheme for those numbers, building on the packet switching theory Leonard Kleinrock helped pioneer. Some will know these names; most will not. 

But a name that probably doesn’t come up enough is Ted Nelson. His tale is one of brilliance and the early days of computing and the spread of BASIC and an urge to do more. It’s a tale of the hacker ethic. And yet, it’s also a tale of irreverence - to be used as a warning for those with aspirations to be remembered for something great. Or is it?

Steve Jobs famously said “real artists ship.” Ted Nelson did ship. Until he didn’t. Let’s go all the way back to 1960, when he started Project Xanadu. Actually, let’s go a little further back first. 

Nelson was born to TV director Ralph Nelson and Celeste Holm, who won an Academy Award for her role in Gentleman's Agreement in 1947, took home another pair of nominations through her career, and was the original Ado Annie in Oklahoma!. His dad worked on The Twilight Zone - so of course Ted majored in philosophy at Swarthmore College, then went off to the University of Chicago and then Harvard for graduate school, taking a stab at film after he graduated. But he was meant for an industry that didn't exist yet and would some day eclipse the film industry: software.

 While in school he got exposed to computers and started to think about this idea of a repository of all the world’s knowledge. And it’s easy to imagine a group of computing aficionados sitting in a drum circle, smoking whatever they were smoking, and having their minds blown by that very concept. And yet, it’s hard to imagine anyone in that context doing much more. And yet he did.

Nelson created Project Xanadu in 1960. As we’ll cover, he did a lot of projects during the remainder of his career. The Journey is what is so important, even if we never get to the destination. Because sometimes we influence the people who get there. And the history of technology is as much about failed or incomplete evolutions as it is about those that become ubiquitous. 

It began with a project while he was enrolled in Harvard grad school. Other word processors were at the dawn of their existence. But he began thinking through and influencing how they would handle information storage and retrieval. 

Xanadu was supposed to be a computer network that connected humans to one another. It was supposed to be simple: a scheme for world-wide electronic publishing. Unlike the web, which would come nearly three decades later, it was supposed to be bilateral, with broken links self-repairing, much as nodes on the ARPAnet did. His initial proposal was a program in machine language that could store and display documents. Coming before the advent of Markdown, ePub, XML, PDF, RTF, or any of the other common open formats we use today, it was rudimentary and would evolve over time. Keep in mind, it was for documents - and as Nelson would say later, the web, which began as a document tool, was a fork of the project. 

The term Xanadu was borrowed from Samuel Taylor Coleridge's Kubla Khan, itself written after some opium fueled dreams about a garden in Kublai Khan's Shangdu, or Xanadu. In his biography, Coleridge explained the rivers in the poem supply "a natural connection to the parts and unity to the whole" and he described a "stream, traced from its source in the hills among the yellow-red moss and conical glass-shaped tufts of bent, to the first break or fall, where its drops become audible, and it begins to form a channel." 

Connecting all the things was the goal and so Xanadu was the name. He gave a talk and presented a paper called “A File Structure for the Complex, the Changing and the Indeterminate” at the Association for Computing Machinery in 1965 that laid out his vision. This was the dawn of interactivity in computing. Digital Equipment had launched just a few years earlier and brought the PDP-8 to market that same year. The smell of change was in the air and Nelson was right there. 

After that, he started to see all these developments around the world. He worked on a project at Brown University to develop a word processor with many of his ideas in it. But the output of that project, as with most word processors since, was to get things printed. He believed content was meant to be created and live its entire lifecycle in digital form. This would provide perfect forward and reverse citations, text enrichment, and change management. And maybe, if we all stand on the shoulders of giants, it would allow us to avoid rewriting or paraphrasing the works of others to include them in our own writings. We could do more without that tedious regurgitation. 

He furthered his counter-culture credentials by going to Woodstock in 1969. Probably not for that reason, but it happened nonetheless. And he traveled and worked with more and more people and companies, learning and engaging and enriching his ideas. And then he shared them. 

Computer Lib/Dream Machines was a paperback book. Or two. It had a cover on each side. Originally published in 1974, it was one of the most important texts of the computer revolution. Steven Levy called it an epic. It’s rare to find it for less than a hundred bucks on eBay at this point because of how influential it was and what an amazing snapshot in time it represents. 

Xanadu was to be a hypertext publishing system in the form of Xanadocs, or files that could be linked to from other files. A Xanadoc used Xanalinks to embed content from other documents into a given document. These spans of text, called transclusions, would update in every document that included them whenever the source document changed. The iterations towards working code were slow and the years ticked by. That talk in 1965 gave way to the 1970s, then the 80s. Some thought him brilliant. Others didn't know what to make of it all. But many knew of his ideas for hypertext, and once known, the ideas seemed almost inevitable.
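Transclusion is easy to sketch, even if Xanadu's full vision (with stable addressing, versioning, and micropayments) was far richer. In this toy model - the names are my own, not Xanadu's - a compound document stores references to spans of other documents and resolves them at read time, so edits to a source show up everywhere it is included:

```python
# A toy model of transclusion: source documents are stored once, and
# other documents hold *references* to spans of them rather than copies.
documents = {"coleridge": "In Xanadu did Kubla Khan a stately pleasure-dome decree"}

# A compound document mixes literal text with (doc_id, start, end) references.
essay = ["Nelson borrowed his title from the line: ", ("coleridge", 0, 24)]

def render(parts):
    """Resolve references against the live documents at read time."""
    return "".join(
        p if isinstance(p, str) else documents[p[0]][p[1]:p[2]] for p in parts
    )

print(render(essay))  # ends with "In Xanadu did Kubla Khan"

# Edit the source, and the transclusion updates wherever it appears.
documents["coleridge"] = "IN XANADU DID KUBLA KHAN a stately pleasure-dome decree"
print(render(essay))  # now ends with the upper-case text
```

The web inverted this: a link points at a whole page and breaks silently when the target moves, which is exactly the failure mode Xanadu's bilateral, self-repairing links were meant to prevent.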

Byte Magazine published many of his thoughts in a 1988 article called "Managing Immense Storage," and by then the personal computer revolution had come in full force. Tim Berners-Lee put the first node of the World Wide Web online the next year, using a protocol they called Hypertext Transfer Protocol, or http. Yes, the hypertext philosophy was almost a means of paying homage to the hard work and deep thinking Nelson had put in over the decades. But not everyone saw it as though Nelson had made great contributions to computing. 

“The Curse of Xanadu” was an article published in Wired Magazine in 1995. In the article, the author points out the fact that the web had come along using many of the ideas Nelson and his teams had worked on over the years but actually shipped - whereas Nelson hadn’t. Once shipped, the web rose in popularity becoming the ubiquitous technology it is today. The article looked at Xanadu as vaporware. But there is a deeper, much more important meaning to Xanadu in the history of computing. 

Perhaps inspired by the Wired article, the group released an incomplete version of Xanadu in 1998. But by then other formats - including PDF, invented in 1993, and .doc for Microsoft Word - were the primary mechanisms by which we stored documents, and first gopher and then the web were spreading to interconnect humans with content.

The Xanadu story isn’t a tragedy. Would we have had hypertext as a part of Douglas Engelbart’s oNLine System without it? Would we have object-oriented programming or later the World Wide Web without it? The very word hypertext is almost an homage, even if they don’t know it, to Nelson’s work. And the look and feel of his work lives on in places like GitHub, whether directly influenced or not, where we can see changes in code side-by-side with actual production code, changes that are stored and perhaps rolled back forever.

Larry Tesler coined the term Cut and Paste. While Nelson calls him a friend in Werner Herzog's Lo and Behold, Reveries of the Connected World, he also points out that Tesler's term is flawed. And I think this is where we as technologists have to sometimes trim down our expectations of how fast evolutions occur. We take tiny steps because as humans we can't keep pace with the rapid rate of technological change. We can look back and see a two-steps-forward, one-step-back approach since the dawn of written history. Nelson still doesn't think the metaphors that harken back to paper have any place in the online written word. 

Here's another important trend in the history of computing. As we've transitioned to more and more content living online exclusively, the content has become diluted. One publisher I wrote online pieces for asked that they all be +/- 700 words, that paragraphs be no more than 4 sentences long (preferably 3), and that the sentences be written at about a 5th or 6th grade level. Maybe Nelson would claim that this de-evolution of writing is due to search engine optimization gamifying the entirety of human knowledge, and that a tool like Xanadu would have been the fix. After all, if we could borrow the great works of others we wouldn't have to paraphrase them. But I think, as with most things, it's much more nuanced than that. 

Our always online, always connected brains can only accept smaller snippets. So that's what we gravitate towards. Actually, we have plenty of capacity for whatever we actually choose to immerse ourselves in. But we have more options than ever before and we of course immerse ourselves in video games or other less literary pursuits. Or are they more literary? Some generations thought books to be dangerous. As do all oppressors. So who am I to judge where people choose to acquire knowledge or what kind they indulge themselves in. Knowledge is power and I'm just happy they have it. And they have it in part because others were willing to water down the concepts to ship a product. Because the history of technology is about evolutions, not revolutions. And those often take generations. And Nelson is responsible for some of the evolutions that brought us the ht in http or html. And for that we are truly grateful!

As with the great journey from Lord of the Rings, rarely is greatness found alone. The Xanadu adventuring party included Cal Daniels, Roger Gregory, Mark Miller, Stuart Greene, Dean Tribble, and Ravi Pandya. The project became a part of Autodesk in the 80s, got rewritten in Smalltalk, and was considered a rival to the web, but really it's more of an evolutionary step on that journey. If anything it's a divergence and then a convergence to and from Vannevar Bush's Memex.

So let me ask this as a parting thought: are the places where you are not willing to sacrifice any of your core designs or beliefs worth the price being paid? Are they worth someone else ending up with a place in the history books where (like with this podcast) we oversimplify complex topics to make them digestible? Sometimes it's worth it. In no way am I in a place to judge the choices of others. Only history can really do that - but when it happens it's usually an oversimplification anyway... So the building blocks of the web lie in irreverence - in hypertext. And while some grew out of irreverence and diluted their vision after an event like Woodstock, others like Nelson and his friend Douglas Engelbart forged on. Their visions didn't come with commercial success. But as integral building blocks of the modern connected world, they represent minds as great as practically any in computing. 

An Abridged History Of Instagram


This was a hard episode to do. Because telling the story of Instagram is different than explaining the meaning behind it. You see, on the face of it - Instagram is an app to share photos. But underneath that it’s much more. It’s a window into the soul of the Internet-powered culture of the world. Middle schoolers have always been stressed about what their friends think. It’s amplified on Instagram. People have always been obsessed with and copied celebrities - going back to the ages of kings. That too is on Instagram. We love dogs and cute little weird animals. So does Instagram. 

Before Instagram, we had photo sharing apps, like Hipstamatic. Before Instagram, we had social networks, like Twitter and Facebook. How could Instagram do something different and yet so similar? How could it offer that window into the world when the lens those photos are snapped through is rose-colored? Do they show us reality or what we want reality to be? Could it be that the food we throw away or the clothes we donate tell us more about us as humans than what we eat or keep? Is the illusion worth billions of dollars a year in advertising revenue while the reality represents our repressed shame?

Think about that as we go through this story.

If you build it, they will come. Everyone who builds an app just kinda' automatically assumes that throngs of people will flock to the App Store, download the app, and they will be loved and adored and maybe even become rich. OK, not everyone thinks such things - and with the number of apps on the stores these days, the chances are probably getting closer to those of a high school quarterback playing in the NFL. But in today's story, that is exactly what happened. 

And Kevin Systrom had already seen it happen. He was offered a job as one of the first employees at Facebook while still going to Stanford. That’ll never be a thing. Then while on an internship he was asked to be one of the first Twitter employees. That’ll never be a thing either. But they were things, obviously!

So in 2010, Systrom started working on an app he called Burbn, and within two years sold the company, by then called Instagram, for one billion dollars. In doing so he and his co-founder Mike Krieger helped forever change the deal landscape for mergers and acquisitions of apps and, more profoundly, gave humanity lenses with which to see the world we want to see - if not reality.

Systrom didn’t have a degree in computer science. In fact, he taught himself to code after working hours, then during working hours, and by osmosis through working with some well-known founders. 

Burbn was an app to check in and post plans and photos. It was written in HTML5 and, in a Cinderella story, he was able to raise half a million dollars in funding from Baseline Ventures and Andreessen Horowitz, bringing in Mike Krieger as a co-founder. 

At the time, Hipstamatic was the top photo manipulation and filtering app. Given that the iPhone came with a camera on par with (if not better than) most digital point-and-shoots at the time, the pair re-evaluated the concept and instead leaned further into photo sharing, while still maintaining the location tagging.

The original idea was to swipe right and left, as we do in apps like Tinder. But instead they chose to show photos in chronological order and used a now-iconic 1:1 aspect ratio - the photos were square - so there was room on the screen to show metadata and a taste of the next photo, to keep us streaming. The camera was simple, like the Holga camera Systrom had been given while studying abroad during his Stanford years. That camera made pictures a little blurry, which in an almost filtered way made them look almost artistic. 

After Systrom graduated from Stanford in 2006, he worked at Google, then NextStop, and then got the bug to make his own app. And boy did he. One thing though: even his wife Nicole didn't think she could take good photos, having seen those from a friend of Systrom's. He said the photos were so good because of the filters. And so we got the first filter, X-Pro II, so she could take great photos on the iPhone 3G. 

Krieger shared the first post on Instagram on July 16, 2010 and Systrom followed up within a few hours with a picture of a dog. The first of probably a billion dog photos (including a few of my own). And they officially published Instagram on the App Store in October of 2010.

After adding more and more filters, Systrom and Krieger closed in on one of the greatest growth hacks of any app: they integrated with Facebook, Twitter, and Foursquare so you could take the photo in Instagram and shoot it out to one of those apps - or all three.

At the time Facebook was more of a browser tool. Few people used the mobile app. And for those that did try and post photos on Facebook, doing so was laborious, using a mobile camera roll in the app and taking more steps than needed. Instagram became the perfect glue to stitch other apps together. And rather than always needing to come up with something witty to say like on Twitter, we could just point the camera on our phone at something and hit a button. 

The posts had links back to the photo on Instagram. They hit 100,000 users in the first week and a million users by the end of the year. Their next growth hack was to borrow the hashtag concept from Twitter and other apps, which they added in January of 2011.

Remember how Systrom interned at Odeo and turned down the offer to go straight to Twitter after college? Twitter didn't have photo sharing at the time, but Twitter co-founder Jack Dorsey had shown Systrom plenty of programming techniques and the two stayed in touch. He became an angel investor in a $7 million Series A and the first real influencer on the platform, sending the link to every photo to all of his Twitter followers every time he posted. The growth continued. In June 2011 they hit 5 million users, and doubled to 10 million by September 2011. I was one of those users, posting the first photo to @krypted that fall - being a nerd, it was of the iOS 5.0.1 update screen and, according to the lone comment on the photo, my buddy @acidprime apparently took the same photo. 

They spent the next few months just trying to keep the servers up and running, and released an Android version of the app in April of 2012, just a couple of days before taking on $50 million in venture capital. But that didn't need to last long - they sold the company to Facebook for a billion dollars a few days later, effectively doubling the money of each investor in that last round of funding and shooting up to 50 million users by the end of the month. 

At 13 employees, that's nearly $77 million per employee. Granted, much of that went to Systrom and the investors. The Facebook acquisition seemed great at first. Instagram got access to bigger resources than even a few more rounds of funding would have provided. 

Facebook helped them scale up to 100 million users within a year and following Facebook TV, and the brief but impactful release of Vine at Twitter, Instagram added video sharing, photo tagging, and the ability to add links in 2013.  Looking at a history of their feature releases, they’re slow and steady and probably the most user-centered releases I’ve seen. And in 2013, they grew to 150 million users, proving the types of rewards that come from doing so. 

With that kind of growth it might seem that it can’t last forever - and yet on the back of new editing tools, a growing team, and advertising tools, they managed to hit a staggering 300 million users in 2014.

While they had released thoughtful, direct, human-sold advertising before, they opened up the ability to buy ads to all advertisers, piggybacking on the Facebook ad selling platform in 2015. That's the same year they introduced Boomerang, which looped a burst of photos forward and in reverse. It was cute for a hot minute. 

2016 saw the introduction of analytics that included demographics, impressions, likes, reach, and other tools for businesses to track performance not only of ads, but of posts. As with many tools, it was built for the famous influencers that had the ear of the founders and management team - and made available to anyone. They also introduced Instagram Stories, which was a huge development effort, and they owned that they copied it from Snapchat - a surprising and truly authentic move for a Silicon Valley startup. And we could barely call them a startup any longer, shooting past half a billion users by the middle of the year and 600 million by the end of the year. 

That year, they also brought us live video, a Windows client, and one of my favorite aspects with a lot of people posting in different languages, they could automatically translate posts. 

But something else happened in 2016. Donald Trump was elected to the White House. This is not a podcast about politics, but it's safe to say that it was one of the most divisive elections in recent US history. And one of the first where social media is reported to have potentially changed the outcome. Disinformation campaigns from foreign actors, data illegally obtained via Cambridge Analytica on the Facebook network, increasingly insular personal networks, and machine learning-driven doubling down on only showing things that appealed to our world view all combined to let many point at networks like Facebook and Twitter as having been party to whatever they thought the "other side" in the election had done wrong. 

Yet Instagram was just a photo sharing site. They put the users at the center of their decisions. They promoted the good things in life. While Zuckerberg claimed that Facebook couldn't have helped change any outcomes and that Facebook was just an innocent platform that amplified human thoughts, Systrom openly backed Hillary Clinton. And yet, even with disinformation spreading on Instagram, they seemed immune from accusations and from having to go to Capitol Hill to be grilled following the election. Being good to users apparently has its benefits. 

However, some regulation needed to happen. In 2017, the Federal Trade Commission stepped in to force influencers to be transparent about their relationships with advertisers - Instagram responded by giving us the ability to mark a post as sponsored. Still, Instagram revenue spiked past three and a half billion dollars in 2017.

Instagram revenue grew past 6 billion dollars in 2018. Systrom and Krieger stepped away from Instagram that year. It was now on autopilot. 

Instagram revenue shot over 9 billion dollars in 2019. In those years they released IGTV and tried to get more resources from Facebook, contributing far more to the bottom line than they took. 

2020 saw Instagram ad revenue reach nearly 13.86 billion dollars, with projected 2021 revenues growing past 18 billion.

In The Picture of Dorian Gray, from 1890, Lord Henry describes the impact of influence as destroying our genuine and true identity, taking away our authentic motivations, and - as Shakespeare would have put it - making us servile to the influencer. Some are already famous and so become influencers on the product naturally, like musicians, politicians, athletes, and even the Pope. Others become famous after getting showcased by the @instagram feed or some other prominent person. These influencers often stage a beautiful life and, to be honest, sometimes we just need that as a little mind candy. But other times it can become too much, forcing us to constantly compare our skin to doctored skin, our lifestyle to those who staged their own, and our number of friends to those who might just have bought theirs. And seeing this obvious manipulation gives some of us even more independence than we might have felt before. We have a choice: to be or not to be. 

The Instagram story is one with depth. The influencers are one of its more visible aspects, going back to the first sponsored photos, posted by Snoop Dogg. And when Mark Zuckerberg decided to buy the company for a billion dollars, many thought he was crazy. But once they turned on the ad revenue machine - which he insisted Systrom wait on until the company had enough users - it was easy to go from 3 to 6 to 9 to over 13 and now likely over 18 billion dollars a year. That's a greater than 30:1 return on investment, helping to prove that such lofty acquisitions aren't crazy. 
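As a back-of-the-envelope check on that return, here's a quick sketch using the approximate revenue figures mentioned in this episode. Treating cumulative ad revenue against the purchase price is only a rough proxy for return, not a formal ROI calculation.

```python
# Rough sketch: cumulative Instagram ad revenue vs. Facebook's $1B purchase price.
# Figures (billions of USD) are the approximate annual numbers cited in this episode.
annual_revenue_billions = [3.5, 6.0, 9.0, 13.86, 18.0]  # 2017-2021 (2021 projected)
purchase_price_billions = 1.0

cumulative = sum(annual_revenue_billions)
ratio = cumulative / purchase_price_billions
print(f"Cumulative revenue: ${cumulative:.2f}B, roughly {ratio:.0f}:1 against the purchase price")
```

Even counting only the years after the ad machine was switched on, the multiple comfortably clears the 30:1 figure.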

It’s also a story of monopoly, or at least of suspected monopolies. Twitter tried to buy Instagram, and Systrom claims to have never seen a term sheet with a legitimate offer. Then Facebook swooped in and helped fast-track regulatory approval of the acquisition. With the acquisition of WhatsApp, Facebook owns four of the top six social media sites - Facebook, WhatsApp, Facebook Messenger, and Instagram are all over a billion users, with YouTube arguably being more of a video site than a true social network. And they tried to buy Snapchat - only the 17th ranked network. 

More than 50 billion photos have been shared through Instagram. That’s about a thousand a second. Many are beautiful...

Before the iPhone Was Apple's Digital Hub Strategy


Steve Jobs returned to Apple in 1996. At the time, some people had a digital camera, like the Canon Elph that was released that year, maybe a digital video camera, and probably a computer, and about 16% of Americans had a cell phone. Some had a voice recorder or a Discman, and some in the audio world had a four-track machine. Many had CD players and maybe even a LaserDisc player. 

But all of this was changing. Small, cheap microprocessors were leading to more and more digital products. The MP3 was starting to trickle around after being patented in the US that year. Netflix would be founded the next year, as DVDs started to spring up around the world. Ricoh, Polaroid, Sony, and most other electronics makers released digital video cameras. There were early e-readers, personal digital assistants, and even research into digital video recorders that could record your favorite shows so you could watch them when you wanted. In other words we were just waking up to a new, digital lifestyle. But the industries were fragmented. 

Jobs and the team continued the work begun under Gil Amelio to reduce the number of products down from 350 to about a dozen. They made products that were pretty and functional and revitalized Apple. But there was a strategy that had been coming together in their minds and it centered around digital media and the digital lifestyle. We take this for granted today, but mostly because Apple made it ubiquitous. 

Apple saw the iMac as the centerpiece of a whole new strategy. But all this new media, with its massive files, needed a fast bus to carry all those bits. That bus had been created back in 1986 and slowly improved over the following years in the form of IEEE 1394, or FireWire. Apple started it - Toshiba, Sony, Panasonic, Hitachi, and others helped bring it to the devices they made. FireWire could connect 63 peripherals at 100 megabits per second, later increased to 200, then 400, and eventually 3200. Plenty fast enough to transfer those videos, songs, and whatever else we wanted.
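To get a feel for what those rated speeds meant in practice, here's a rough sketch of transfer times at each FireWire generation. The 1 GB file size is a hypothetical example, and real-world throughput was lower than the rated signaling speed.

```python
# Order-of-magnitude transfer times at FireWire's rated signaling speeds.
# Real-world throughput was lower; the 1 GB file is a hypothetical example.
file_size_gb = 1.0
file_size_megabits = file_size_gb * 8 * 1024  # 1 GB ~= 8192 megabits

for speed_mbps in (100, 200, 400, 3200):
    seconds = file_size_megabits / speed_mbps
    print(f"{speed_mbps:>4} Mb/s: ~{seconds:5.1f} s to move a {file_size_gb:.0f} GB file")
```

At the original 100 megabits, a gigabyte of home video moved in under a minute and a half - which, compared to serial ports and floppy-era transfers, really was plenty fast.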

iMovie was the first of the applications that fit into the digital hub strategy. It was originally released in 1999 for the iMac DV, the first iMac to come with built-in FireWire. I’d worked on Avid and SGI machines dedicated to video editing at the time, but this was the first time I felt like I was actually able to edit video. It was simple: it could import video straight from the camera and let me drag clips into a timeline and add some rudimentary effects. Simple, clean, and with a product that looked cool. And here’s the thing: within a year Apple made it free. One catch. You needed a Mac.

This whole Digital Hub Strategy idea was coming together. As Steve Jobs would point out in a presentation about it at Macworld 2001, up to that point personal computers had mainly been about productivity - automating first the tasks of scientists, then, with the advent of the spreadsheet and the database, business and personal functions. A common theme in this podcast is that what drives computing is productivity, telemetry, and quality of life. The telemetry gains came with connecting humanity through the rise of the internet in the late 1990s. But these new digital devices were what was going to improve our quality of life. And anyone who could get their hands on an iMac was now doing so. But it still felt like a bit of a closed ecosystem. 

Apple released a tool for making DVDs in 2001 for the Power Mac G4, which came with a SuperDrive - Apple’s name for an optical drive that could read and write both CDs and DVDs. iDVD gave us the ability to add menus, slideshows (later easily imported from Keynote presentations when that was released in 2003), images as backgrounds, and more. Now we could take those videos we made, burn them to DVDs, and pop them into our DVD player. Families all over the world could make their vacation look a little less like a bunch of kids fighting and a lot more like bliss. And for anyone who needed more, Apple had DVD Studio Pro - which many a film studio used to make the menus for movies for years.

They knew video was going to be a thing because, going back to the 90s, Jobs had tried to get Adobe to release Premiere for the iMac. But they’d turned him down - something he’d never forget. Instead, Jobs was able to sway Randy Ubillos to bring over a product called Key Grip, which a Macromedia board member had convinced him to work on, and which was renamed Final Cut. Apple acquired the source code and the development team and released it as Final Cut Pro in 1999. iMovie for the consumer and Final Cut Pro for the professional turned out to be a home run. But another piece of the puzzle was coming together at about the same time.

Jeff Robbin, Bill Kincaid, and Dave Heller built a tool called SoundJam in 1998. They had worked on the failed Copland project to build a new OS at Apple; afterwards, Robbin made a great old tool (that we might need again, the way extensions are going) called Conflict Catcher while Kincaid worked on the drivers for an MP3 player called the Diamond Rio. Kincaid saw these cool new MP3 things and tools like Winamp, which had been released in 1997, and decided to meet back up with Robbin to build a new tool, which they called SoundJam and sold for $50. 

It just so happens that I’ve never met anyone at Apple who didn’t love music - going back to Jobs and Wozniak. So of course they would want to do something in digital music. In 2000, Apple acquired SoundJam and the team immediately got to work stripping out unnecessary features. They wanted a simple aesthetic: iMovie-esque, brushed metal, easy to use. That product was released in 2001 as iTunes.

iTunes didn’t change the way we consumed music. That revolution was already underway. And that team didn’t just bring brushed metal to the rest of the operating system: the look had roots in QuickTime, but it was iTunes, by way of SoundJam, that sparked brushed metal’s spread. 

SoundJam gave the Mac music visualizers as well - you know, those visuals on the screen generated by the sound waves of the music we were listening to. And while we didn’t know it yet, iTunes would mark the beginning of the end of software coming in physical boxes. But something else big was coming: there was another device in the digital hub strategy. iTunes became the de facto tool used to manage which songs would go on the iPod, also released in 2001. That’s worthy of its own episode, which we’ll do soon. 

You see, another aspect of SoundJam is that users could rip music off of CDs and into MP3s. The deep engineering work done to get the codec into the system survives here and there in the form of codecs accessible through APIs in the OS. Combined with Spotlight to find music, it all became more powerful: building playlists, embedding metadata, and listening more insightfully to growing music libraries. But Apple didn’t just want to let people rip, find, sort, and listen to music. They also wanted to enable users to create it. So in 2002, Apple acquired a company called Emagic. Emagic’s product would become Logic Pro, and Gerhard Lengeling would, in 2004, release a much simpler audio engineering tool called GarageBand. 

Digital video and video cameras were one thing, but cheap digital point-and-shoot cameras were suddenly everywhere. iPhoto was the next tool in the strategy, dropping in 2002. Here we got a tool that could import all those photos from our cameras into a single library. Later, in the app now called Photos, Apple gave us a taste of the machine learning to come by automatically finding faces in photos so we could easily make albums. Special services popped up to print books of our favorite photos. At the time, most cameras shipped with their own photo-management software, usually developed as an afterthought. iPhoto was easy, worked with most cameras, and was very much not an afterthought. 

Keynote came in 2003, making it easy to drop photos into a presentation - and maybe even into iDVD. Anyone who has seen a Steve Jobs presentation understands why Keynote had to happen, and if you look at the difference between many a PowerPoint and a Keynote presentation, it makes sense why Keynote was, in a way, a bridge between making work better and making home life better. 

That was the same year Apple released the iTunes Music Store. This seemed like the final step in the move to get songs onto devices. Here, Jobs worked with music company executives to sell music through iTunes - a strategy that would evolve over time to include podcasts (which the move effectively created), news, and even apps, as explored in the episode on the App Store. It ushered in an era of creative single-purpose apps that drove down costs and made so much functionality approachable for so many. 

iTunes, iPhoto, and iMovie were made to live together in a consumer ecosystem. So in 2003, Apple reached the point in the digital hub strategy where they were able to take our digital lives and wrap them up in a pretty bow. They called that product iLife - really a bundle of these apps, along with iDVD and GarageBand. These apps are free now, but at the time the bundle would set you back a nice, easy, approachable $49. 

All this content creation, from the consumer to the prosumer to the professional workgroup, meant we needed more and more storage. Depending on the codec, we could be pushing hundreds of megabytes per second of content. So in 2004 Apple licensed the StorNext File System from a company called ADIC and released a 64-bit clustered file system that ran over Fibre Channel. Suddenly all that new high-end creative content could be shared in larger and larger environments. We could finally have someone cut a movie in Final Cut and then hand it off to someone else to cut, without unplugging a FireWire drive to do it. Professional workflows in a pure-Apple ecosystem were a thing. 

Now you just needed a way to distribute all this content. So came iWeb in 2006, which allowed us to build websites quickly and bring all this creative content in. Sites could be hosted on MobileMe or uploaded to a web host via FTP. Apple had dabbled in web services since the 80s with AppleLink, then eWorld, then iTools, .Mac, and MobileMe - the culmination of the evolution of these services now referred to as iCloud. 

And iCloud now syncs documents and more. Pages came in 2005, Numbers came in 2007, and they were bundled with Keynote to become Apple iWork, a competitor of sorts to Microsoft Office, later made free and ported to iOS as well. iCloud is a half-hearted attempt at keeping these documents synchronized across all of our devices. 

Apple had been attacking the creative space from the bottom with the tools in iLife, but from the top as well. Competing with tools like Avid’s Media Composer, which had been on the Mac going back to 1989, Apple bundled the professional video products into a single suite called Final Cut Studio. Here, Final Cut Pro, Motion, DVD Studio Pro, Soundtrack Pro, Color (obtained when Apple acquired Silicon Color and renamed its FinalTouch product), Compressor, Cinema Tools, and Qmaster (for distributing the processing load of the other tools) came in one big old box. iMovie and GarageBand served the consumer market; Final Cut Studio and Logic served the prosumer-to-professional market. And suddenly I was running around the world deploying Xsans into video shops, corporate talking-head editing studios, and ad agencies.

Another place where this happened was with photos. Aperture, released in 2005, offered professional photographers tools to manage their large collections of images. And that represented the final piece of the strategy. It continued to evolve and get better over the years, but this was one of the last aspects of the Digital Hub Strategy. 

Because there was a new strategy underway: that’s the year Apple began development of the iPhone. Released in 2007, then followed by the first iPad in 2010, it marked a shift from growing new products in the digital hub strategy to migrating them to the mobile platforms - making them stand-alone apps that could be sold on App Stores and integrated with iCloud, killing off those that appealed to more specific needs in higher-end creative environments (like Aperture, which ended in 2014), and folding some into other products, like Color becoming part of Final Cut Pro. The income from those products has now been eclipsed by mobile devices. Because when the returns from one strategy begin to crest - you know, like when the entire creative industry loves you - it’s time to move to another, bolder strategy. And that mobile strategy opened our eyes to always-online (or frequently online) synchronization and integration between products, like we get with Handoff and other technologies today. 

In 2009 Apple acquired a company called Lala, whose technology would later feed into iCloud - but the impact on the Digital Hub Strategy was that it paved the way for iTunes Match, a cloud service that allowed music in a local library to sync to other Apple devices. It was a subscription, and more of a stop-gap on the way to subscription music licensing than a lasting stand-alone product. Other acquisitions would come over time and get woven in as well, such as Redmatica, Beats, and Swell. 

Steve Jobs said exactly what Apple was going to do in 2001. In one of the most impressive implementations of a strategy, Apple had been slowly introducing quality products that tactically ushered in the digital lifestyle since the late 90s: iMovie, iPhoto, iTunes, iDVD, iLife, and - in a sign of the changing times - iPod, iPhone, and iCloud. Then, to signal the end of that era, because the digital lifestyle was by then ubiquitous, came the iPad. The professional apps won over the creative industries - until the strategy had played out and Apple began laying the groundwork for the next one in 2005. 

That mobile revolution was built in part on the creative influences of Apple. Tools that came after, like Instagram, made it even easier to take great photos and connect with friends in a way iWeb couldn’t - because we got to the point where “there’s an app for that.” And as the tools were no longer needed, Apple cancelled some one by one, or even let Adobe Premiere eclipse Final Cut in many ways - because, you know, sales of the iMac DV were enough to warrant building the product on the Apple platform, and eventually Adobe decided to do just that. Apple built many of these tools because there was a need and there weren’t great alternatives. Once there were great alternatives, Apple let those limited numbers of software engineers go work on other things it needed done - like building frameworks to enable a new generation of engineers to build amazing tools for the platform!

I’ve always considered the release of the iPad to be the end of the era in which Apple was introducing more and more software - from increased services on the server platform to tools that did anything and everything. But 2010 is just when we could notice what Jobs was doing. In fact, looking back, we can easily see that the strategy shifted about five years before that, because Apple was busy ushering in the next revolution in computing. 

So think about this. Take an Apple, a Microsoft, or a Google - the developers of nearly every operating system we use today. What changes did they put in place five years ago that are just coming to fruition today? While product lifecycles are annual releases now, that doesn’t mean that when they have billions of devices out there the strategies don’t unfold much, much slower. By peering into the evolutions of the past few years, we can see where they’re taking computing in the next few. Who did they acquire? What products will they release? What gaps does that create? How can we take those gaps and build products that get in front of them? This is where magic happens. Not when we’re too early, like General Magic was, but when we’re right on time - unless we help set strategy upstream. Or is it all chaos and not in the least bit predictable? Feel free to send me your thoughts!

And thank you…

The WELL, an Early Internet Community


The Whole Earth ‘Lectronic Link, or WELL, was started by Stewart Brand and Larry Brilliant in 1985, and is still online today. We did an episode on Stewart Brand, Godfather of the Interwebs, and he was a larger-than-life presence amongst many of the 1980s former hippies who shaped our digital age. From his assistance producing The Mother Of All Demos, to the Whole Earth Catalog inspiring Steve Jobs and many others, to his work with Ted Nelson, there are probably only a few degrees separating him from anyone else in computing. 

Larry Brilliant is another counter-culture hero. He did work as a medical professional for the World Health Organization to eradicate smallpox and came home to teach at the University of Michigan. The University of Michigan had been working on networked conferencing since the 70s when Bob Parnes wrote CONFER, which would be used at Wayne State where Brilliant got his MD. But CONFER was a bit of a resource hog.

PicoSpan was written by Marcus Watts in 1983. Pico is a small text editor in many a UNIX variant, and small was the point - modems that dialed into bulletin boards were pretty slow back then. 

Marcus worked at NETI, which bought the rights to PicoSpan to take it to market. Brilliant was the chairman of NETI at the time and approached Brand about starting a bulletin-board system (BBS). Brilliant proposed that NETI would supply the gear and software and that Brand would use his, uh, brand - and his Whole Earth following - to fill the ranks. Brand’s nonprofit, The Point Foundation, would own half and NETI would own the other half. 

It became an early online community outside of academia, and an important part of the rise of the splinter-nets and a holdout to the Internet. For a time, at least. 

PicoSpan gave users conferences. These were similar to PLATO Notes files: a user could create a conversation thread and people could respond. These were (and still are) linear, threaded conversations. Rather than call them Notes like PLATO did, PicoSpan referred to them as “conferences,” since “online conferencing” was a common term for meeting online for discussions at the time. EIES had been around since the 1970s, so Brand, having used it, had some ideas about what an online community could be. And given the sharp drop in the cost of storage, there was something new PicoSpan could give people: posts could last forever. Keep in mind, the Mac still didn’t ship with a hard drive in 1984. But hard drives were on the rise. 

And those bits that were preserved were manifested in words. Brand brought a simple mantra: You Own Your Own Words. This kept the hands of the organization clean and devoid of liability for what was said on The WELL - but it also harkened back to the almost libertarian bent many in technology had at the time. Part of me feels like libertarianism meant something different in that era. But that’s a digression. Whole Earth Review editor Art Kleiner flew up to Michigan to get the specifics drawn up. NETI’s investment had a cash value of about a quarter million dollars. Brand stayed home and came up with a name: The Whole Earth ‘Lectronic Link, or WELL. 

The WELL was not the best technology, even at the time. The VAX was woefully underpowered for as many users as The WELL would grow to, and other dial-in discussion services were springing up. But it was one of the most influential of the time. And not because they recreated the extremely influential Whole Earth Catalog in digital form, as Brilliant wanted - which would probably have been similar to what Amazon reviews are like now. Instead, the draw was the people. 

The community was fostered first by Matthew McClure, the initial director, who was a former typesetter for the Whole Earth Catalog. He’d spent 12 years on a commune called The Farm and was just getting back to society. They worked out that they needed to charge $8 a month, plus another couple of bucks an hour, to make a minimal profit. 

So McClure worked with NETI to get the VAX up and they created the first conference, General. Kevin Kelly from the Whole Earth Review and Brand would start discussions, and Brand mentioned The WELL in some of his writings. A few people joined, and then a few more. 

Others from The Farm would join him. Cliff Figallo, known as Fig, was user 19, and John Coate, who went by Tex, came in to run marketing. In those first few years they built up a base of users.

It started with hackers and journalists, who got free accounts. And from there, great thinkers joined up - people like Tom Mandel from the Stanford Research Institute, or SRI, who would go on to become the editor of Time Online, along with his partner Nana; and Howard Rheingold, who would go on to write a book called The Virtual Community. And they attracted more - especially Dead Heads, who helped spread the word across the country during the heyday of the Grateful Dead. 

Plenty of UNIX hackers also joined. After all, the community was finding a nexus in the Bay Area at the time. They added email in 1987 and it was one of those places you could get on at least one part of this whole new internet thing. And need help with your modem? There’s a conference for that. Need to talk about calling your birth mom who you’ve never met because you were adopted? There’s a conference for that as well. Want to talk sexuality with a minister? Yup, there’s a community for that. It was one of the first times that anyone could just reach out and talk to people. And the community that was forming also met in person from time to time at office parties, furthering the cohesion. 

We take Facebook groups, Slack channels, and message boards for granted today. We can be us or make up a whole new version of us. We can be anonymous and just there to stir up conflict like on 4Chan or we can network with people in our industry like on LinkedIn. We can chat real time, which is similar to the Send option on The WELL. Or we can post threaded responses to other comments. But the social norms and trends were proving as true then as now. Communities grow, they fragment, people create problems, people come, people go. And sometimes, as we grow, we inspire. 

Those early adopters of The WELL awakened Craig Newmark of Craigslist to the growing power of the Internet - and future developers of Apple products too. Hippies versus nerds, but not really versus: it was more a coming to terms, moving from a “computers are part of the military-industrial complex keeping us down” philosophy to the freer, libertarian information-superhighway ethos that persisted for decades - the thought that the computer would set us free and connect the world into a new nation, as John Perry Barlow would sum up perfectly in “A Declaration of the Independence of Cyberspace.”

By 1990, someone like Barlow could make a post on The WELL from Wyoming and have Mitch Kapor - the founder of Lotus, makers of Lotus 1-2-3 - show up at his house after reading it. They joined forces with John Gilmore, the fifth employee of Sun Microsystems and GNU Debugging Cypherpunk, to found the Electronic Frontier Foundation. And in a sign of the times, that’s the same year The WELL got fully connected to the Internet.

By 1991 they had grown to 5,000 subscribers. That was the year Bruce Katz bought NETI’s half of The WELL for $175,000. Katz had pioneered the casual shoe market, changing the name of his family’s shoe business to Rockport and selling it to Reebok for over $118 million. 

The WELL had posted a profit a couple of times but by and large was growing slower than its competitors - although I’m not sure any of the members cared about that. It was a smaller community than many others, but its members could meet in person, and they seemed to congeal in ways that other communities didn’t. They kept increasing in size over the next few years. In that time, Fig replaced himself with Maurice Weitman, or Mo - who had been the first person to sign up for the service. And Tex soon left as well. 

Tex would go on to become an early webmaster of The Gate, the online community from the San Francisco Chronicle. Fig joined AOL’s GNN and then became director of community at Salon.

But AOL. You see, AOL was founded in that same year, 1985. And by 1994 AOL was up to 1.25 million subscribers, with over a million logging in every day. CompuServe, Prodigy, GEnie, and Delphi were on the rise as well. The WELL had thousands of posts a day by then but was losing money and not growing like the others. I think the users of the service were just fine with that. The WELL was still growing slowly, and yet for many it was too big. Some of those left. Some stayed. Other communities, like The River, fragmented off. By then, The Point Foundation wanted out, so they sold their half of The WELL to Katz for $750,000 - leaving Katz as the first full owner of The WELL. 

I mean, they were an influential community because of some of the members, sure, but more because of the quality of the discussions. Academics, drugs, and deeply personal information. And they had always complained about Fig, Tex, or whomever was in charge - you know, the counter-culture is always mad at “The Management.” But Katz was not one of them. He honestly seems to have tried to improve things - but it seems like everything he tried blew up in his face. 

So Katz further alienated the members when he fired Mo and brought on Maria Wilhelm. They still weren’t hitting hyper-growth: membership got up to around 10,000 while AOL was jumping from 5,000,000 to 10,000,000. But again, I’ve not found anyone who felt The WELL should have gone down that same path. The subscribers at The WELL were looking for an experience of a completely different sort. By 1995, Gail Williams allowed users to create their own topics, and the unruly bunch just kinda’ ruled themselves in a way. There was staff and drama and emotions and hurt feelings and outrage and love and kindness and, well, community.

By the late 90s, the buzzword at many a company was community, and there were indeed plenty of communities growing. But none like The WELL. And given that some of the founders of Salon had been users of The WELL, Salon bought The WELL in 1999 and just kinda’ let it fly under the radar. The influence continued, with various journalists as members. 

The web came. And the members of The WELL continued their community - award-winning, but a snapshot in time in a way. Living in an increasingly secluded corner of cyberspace - a term that first began life in the present tense on The WELL - if you got it, you got it.

In 2012, after trying to sell The WELL to another company, Salon finally sold The WELL to a group of members who had put together enough money to buy it. And The WELL moved into the current, more modern form of existence.

To quote the site:

Welcome to a gathering that’s like no other. The WELL, launched back in 1985 as the Whole Earth ‘Lectronic Link, continues to provide a cherished watering hole for articulate and playful thinkers from all walks of life.

For more about why conversation is so treasured on The WELL, and why members of the community banded together to buy the site in 2012, check out the story of The WELL.

If you like what you see, join us!

It sounds pretty inviting. And it’s member supported. Like National Public Radio kinda’. In what seems like an antiquated business model, it’s $15 per month to access the community. And make no mistake, it’s a community. 

You Own Your Own Words. If you pay to access a community, you don’t sign the ownership of your words away in a EULA. You don’t sign away rights to sell your data to advertisers along with having ads shown to you in increasing numbers in a hunt for ever more revenue. You own more than your words, you own your experience. You are sovereign. 

This episode doesn’t really have a lot of depth to it. Just as most online forums lack the kind of depth that could be found on the WELL. I am a child of a different generation, I suppose.

Through researching each episode of the podcast, I often read books, conduct interviews (a special thanks to Help A Reporter Out), lurk in conferences, and try to think about the connections, the evolution, and what the most important aspects of each are. There is a great little book from Katie Hafner called The Well: A Story of Love, Death, and Real Life. I recommend it. There’s also Howard Rheingold’s The Virtual Community and John Seabrook’s Deeper: Adventures on the Net. Oh, and From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism from Fred Turner, and Cyberia by Douglas Rushkoff. At a minimum, I recommend reading Katie Hafner’s Wired article and then her most excellent book!

Oh, and to hear about other ways the 60s Counterculture helped to shape the burgeoning technology industry, check out What the Dormouse Said by John Markoff. 

And The WELL comes up in nearly every book as one of the early commercial digital communities. It’s been written about in Wired and The Atlantic, and makes appearances in books like Broad Band by Claire Evans and The Internet: A Historical Encyclopedia. 

The business models out there to build and run and grow a company have seemingly been reduced to a select few. Practically every online community has become free, with advertising and data being the currency we trade in exchange for a sense of engagement with others. 

As network effects set in and billionaires are created, others own our words. They think the lifestyle business is quaint - that if you aren’t outgrowing a market segment you are shrinking. And a subscription site that charges a monthly access fee for CGI code, with a user experience that predates the UX field, might affirm that philosophy on the outside - especially since anyone can see your real name. But if we look deeper we see a far greater truth: these barriers keep a small corner of cyberspace special - free from Russian troll farms, election stealing, and spam bots. And without those distractions we find true engagement. We find real connections that go past the surface. We find depth. It’s not lost after all. 

Thank you for being part of this little community. We are so lucky to have you. Have a great day.

Tesla: From Startup To... Startup...




Most early stage startups have, and seemingly need, heroic efforts from brilliant innovators working long hours to accomplish impossible goals. Tesla certainly had plenty of these as an early stage startup and continues to - as do Elon Musk’s other startups. He seems to truly understand and embrace that early stage startup world, and those around him seem to as well.


As a company grows we have to trade those sprints of heroic output for steady streams of ideas and quality. We have to put development on an assembly line. Toyota famously put the ideas of Deming and other post-World War II process experts into their production lines and reaped big rewards - becoming the top car manufacturer in the process. 


Not since the Ford Model T birthed the assembly line had auto makers seen as large an increase in productivity. And make no mistake, technology innovation is about productivity increases. We forget this sometimes when young, innovative startups come along claiming to disrupt industries. Many of those do, backed by seemingly endless amounts of cash to get them to the next level in growth. And the story of Tesla is as much about productivity in production as it is about innovative and disruptive ideas. And the story is as much about a cult of personality as it is about massive valuations and quality manufacturing. 


The reason we’re covering Tesla in a podcast about the history of computers is at the heart of it, it’s a story about the startup culture clashing head-on with decades-old know-how in an established industry. This happens with nearly every new company: there are new ideas, an organization is formed to support the new ideas, and as the organization grows, the innovators are forced to come to terms with the fact that they have greatly oversimplified the world. 

Tesla realized this. Just as Paypal had realized it before. But it took a long time to get there. The journey began much further back. Rather than start with the discovery of the battery or the electric motor, let’s start with the GM Impact. It was initially shown off at the 1990 LA Auto Show. It’s important because Alan Cocconi was able to help take some of what GM learned from the 1987 World Solar Challenge race using the Sunraycer and start putting it into a car that they could roll off the assembly lines in the thousands. 

They needed to do this because the California Air Resources Board, or CARB, was about to require fleets to go 2% zero-emission, or powered by something other than fossil fuels, by 1998 with rates increasing every few years after that. And suddenly there was a rush to develop electric vehicles. GM may have decided that the Impact, later called the EV1, proved that the electric car just wasn’t ready for prime time, but the R&D was accelerating faster than it ever had before then. 

Meanwhile, in 2000, NuvoMedia was purchased by Gemstar-TVGuide International for $187 million. They’d made the Rocket eBook e-reader. That’s important because the co-founders of that company were Martin Eberhard, a University of Illinois Urbana-Champaign grad, and Marc Tarpenning.

Alan Cocconi took what he’d learned and formed a new company called AC Propulsion. He put together a talented group and they built a couple of different cars, including the tZero. Many of the ideas that went into the first Tesla car came from the tZero, and Eberhard and Tarpenning tried to get Tom Gage and Cocconi to take the tZero into production. The tZero was a sleek sportscar that began life powered by lead-acid batteries, could get from zero to 60 in just over four seconds, and could run for 80-100 miles. It used regenerative braking similar to what can be found in the Prius (to oversimplify it), and the car took about an hour to charge. The cars were made by hand and cost about $80,000 each. AC Propulsion had other projects, so they couldn’t focus on trying to mass produce the car. As Tesla would learn later, that takes a long time, focus, and a quality manufacturing process. 

While we think of Elon Musk as synonymous with Tesla Motors, it didn’t start that way. Tesla Motors was started in 2003, when AC Propulsion declined to take the tZero to market, by Eberhard, who would serve as Tesla’s first chief executive officer (CEO), and Tarpenning, who would become the first chief financial officer (CFO). Funding for the company was obtained from Elon Musk and others, but beyond instigation and support, they weren’t that involved at first. It was a small shop with a mission: to develop an electric car that could be mass produced. 

The good folks at AC Propulsion gave Eberhard and Tarpenning test drives in the tZero, and even agreed to license their EV Power System and reductive charging patents. Tesla would develop its own motor and work on its own power train over time, so as not to rely on the patents from AC Propulsion. But the opening Eberhard saw was in the batteries. The idea was to power a car with battery packs made of lithium ion cells, similar to those used in laptops and of course the Rocket eBooks that NuvoMedia had made before they sold the company. They would need funding, though. So Gage was kind enough to put them in touch with a guy who’d just made a boatload of money and had also recommended commercializing the car - Elon Musk. 

This guy Musk, he’d started a space company in 2002. Not many people do that. And they’d been trying to buy ICBMs in Russia and recruiting rocket scientists. Wild. But hey, everyone used PayPal, where he’d made his money. So cool. Especially since Eberhard and Tarpenning had their own successful exit.

Musk signed on to provide $6.5 million in the Tesla Series A and they brought in another $1m to bring it to $7.5 million. Musk became the chairman of the board and they expanded to include Ian Wright during the fundraising and J.B. Straubel in 2004. Those five are considered the founding team of Tesla. 

They got to work building up a team to build a high-end electric sports car. Why? Because that’s one part of the Secret Tesla Motors Master Plan - the title of a blog post Musk wrote in 2006. You see, they were going to build a high-end, hundred-thousand-dollar-plus car first. But the goal was to develop mass market electric vehicles that anyone could afford. They unveiled the prototype in 2006, selling out the first hundred in three weeks.

Meanwhile, Elon Musk’s cousins, Peter and Lyndon Rive, started a company called SolarCity in 2006, which Musk also funded. It merged with Tesla in 2016 to provide solar roofs and other solar options for Tesla cars and charging stations. SolarCity, as with Tesla, was able to capitalize on government subsidies, growing to become the third-largest provider of residential solar installations with just over 6 percent of the market share. 

But we’re still in 2006. They won a bunch of awards and got a lot of attention - now it was time to switch to general production. They worked with Lotus, a maker of beautiful cars whose status, beauty, and luxury make up for issues with production quality. They started with the Lotus Elise, increased the wheelbase, and bolstered the chassis so it could hold the weight of the batteries. And they used a carbon fiber composite for the body to bring the weight back down. 

The process was slower than anyone seems to have thought it would be. Everyone was working long hours, and they were burning through cash. By 2007, Eberhard stepped down as CEO. Michael Marks came in to run the company, and later that year Ze’ev Drori was made CEO - many credit him with tightening things up so they could get to the point that they could ship the Roadster. Tarpenning left in 2008. As did others, but the brain drain didn’t seem all that bad, as they were able to ship their first car in 2008, after ten engineering prototypes.

The Roadster finally shipped in 2008, with the first car going to Musk. It could go 245 miles on a charge. Zero to 60 in less than four seconds. A sleek design language. But it was over $100,000. They were an inspiration and there was a buzz everywhere. The showmanship of Musk, paired with the beautiful cars and the elites that bought them, drew a lot of attention. As did the $1 million in profit they earned in July of 2009, off 109 cars shipped. 

But again, burning through cash. They sold 10% of the company to Daimler AG and took a $465 million loan from the US Department of Energy. They were now almost too big to fail. 

They hit 1,000 cars sold in early 2010. They opened up to orders in Canada. They were growing. But they were still burning through cash. It was time to raise some serious capital. So Elon Musk took over as CEO, cut a quarter of the staff, and Tesla filed for an IPO in 2010, raising over $200 million. But there was something special in that S-1 (as there often is when a company opens the books to go public): they would cease production of the Roadster, making way for the next big product.

Tesla cancelled the Roadster in 2012. By then they’d sold just shy of 2,500 Roadsters and had been thinking through and developing the next thing, which they’d shown a prototype of in 2011. The Model S started at $76,000 and went into production in 2012. It could go 300 miles, was a beautiful car, and came with a flashy tablet-inspired 17-inch display screen on the inside to replace buttons. It was like driving an iPad. Every time I’ve seen another GPS since using the one in a Model S, I feel like I’ve gotten in a time machine and gone back a decade. 

But it had been announced in 2007 to ship in 2009. And then the ship date slipped to 2011, and then 2012. Let’s call that optimism and scope creep. But Tesla has always eventually gotten there. Even if the price goes up. Such is the lifecycle of all technology. More features, more cost. There are multiple embedded Ubuntu operating systems controlling various parts of the car, connected on a network in the car. It’s a modern marvel and Tesla was rewarded with tons of awards and, well, sales.

Charging a car that runs on batteries is a thing. So Tesla released the Superchargers in 2012, shipping seven that year and growing slowly until now shipping over 2,500 per quarter. Musk took some hits because it took longer than anticipated to ship them, then to increase production, then to add solar. But at this point many are solar, and I keep seeing panels popping up above the cars to provide shade and offset other forms of powering the chargers. The more ubiquitous chargers become, the more accepting people will be of the cars.

Tesla needed to produce products faster. The Nevada Gigafactory was begun in 2013 to mass produce battery packs and components. Here’s one of the many reasons for the high-flying valuation Tesla enjoys: it would take dozens if not a hundred factories like this to transition to sustainable energy sources. But it started with a co-investment between Tesla and Panasonic, with the two dumping billions into building a truly modern factory that’s now pumping out close to the goal set back in 2014. As need increased, Gigafactories started to crop up, with Gigafactory 5 being built to supposedly go into production in 2021 to build the Semi, the Cybertruck (which should begin production in 2021), and the Model Y. Musk first mentioned the truck in 2012 and projected a 2018 or 2019 start time for production. Close enough. 

Another aspect of all that software is that the cars can get updates over the air. Tesla released Autopilot in 2014. Similar to other attempts to slowly push towards self-driving cars, Autopilot requires the driver to stay alert, but can take on a lot of the driving - staying within the lines on the freeway, parking itself, traffic-aware cruise control, and navigation. But it’s still the early days for self-driving cars, and while we may think that because the number of transistors on integrated circuits doubles every couple of years it paves the way to pretty much anything, no machine learning project I’ve ever seen has gone as fast as we want, because it takes years to build the appropriate algorithms and then rethink industries based on their impact. But Tesla, Google through Waymo, and many others have been working on it for a long time (hundreds of years in startup-land) and it continues to evolve.

By 2015, Tesla had sold over 100,000 cars in the life of the company. They released the Model X that year. This was their first chance to harness the power of the platform - which in the auto industry is when there are multiple cars of similar size and build. Franz von Holzhausen designed it, and it is a beautiful car, with falcon-wing doors, up to a 370-mile range on the battery, and again with Autopilot. But harnessing the power of the platform was a challenge. You see, with a platform of cars you want most of the parts to be shared - the differences are often mostly cosmetic. But the Model X shared a little less than a third of its parts with the Model S. 

But it’s yet another technological marvel, with All Wheel Drive as an option, that beautiful screen, and check this out - a towing capacity of 5,000 pounds - for an electric automobile!

By the end of 2016, they’d sold over 25,000 of them. To a larger automaker that might seem like nothing, but they’d sell over 10,000 in every quarter after that. And it would also become the platform for a mini-bus. Because why not. So they’d gone lateral in the secret plan, but it was time to get back at it. This is where the Model 3 comes in. 

The Model 3 was released in 2017 and is now the best-selling electric car in the history of the electric car. The Model 3 was first shown off in 2016, and within a week Tesla had taken over 300,000 reservations. Everyone I talked to seemed to want in on an electric car that came in at $35,000. This was the secret plan. That $35,000 model wouldn’t be available until 2019, but they started cranking them out. Production was a challenge, with Musk famously claiming Tesla was in “Production Hell” and sleeping on an air mattress at the factory to oversee the many bottlenecks that came. Musk thought they could introduce more robotics than they could, and so they slowly increased production to first a few hundred per week, then a few thousand, until finally almost hitting that half a million mark in 2020.

This required buying Grohmann Engineering in 2017, now called Tesla Advanced Automation Germany, and pumping billions into production. Tesla added the Model Y in 2020, launching a crossover on the Model 3 platform and producing over 450,000 of them. And then of course they decided to build the Tesla Semi, selling for between $150,000 and $200,000. And what’s better than a Supercharger to charge those things? A Megacharger. As is often the case with ambitious projects at Tesla, it didn’t ship in 2020 as projected but is now supposed to ship, um, later.

Tesla also changed their name from Tesla Motors to Tesla, Inc. And if you check out their website today, solar roofs and solar panels share the top bar with the Models S, 3, X, and Y. SolarCity and batteries, right?

Big money brings big attention. Some good. Some bad. Some warranted. Some not. Musk’s online and sometimes nerd-rockstar persona was one of the most valuable assets at Tesla - at least in the fundraising, stock pumping popularity contest that is the startup world. But on August 7, 2018, he tweeted “Am considering taking Tesla private at $420. Funding secured.” The SEC would sue him for that, causing him to step down as chairman for a time and limit his Twitter account. But hey, the stock jumped up for a bit. 

But Tesla kept keeping on, slowly improving things, and finally hit about the half million cars per year mark in 2020. Producing cars has been about quality for a long time. And it needs to be, with people zipping around as fast as we drive - especially on modern freeways. Small batches of cars are fairly straightforward. Although I could never build one. 

The electric car is good for the environment, but the cost to offset carbon for a Tesla is still far greater than, I don’t know, making a home more energy efficient. But the improvements in the technology continue to come rapidly with all this money and focus being put on them. And the innovative designs that Tesla has deployed have inspired others, which often coincides with the rethinking of entire industries. 

But there are tons of other reasons to want electric cars. The average automobile manufactured these days has about 30,000 parts. Teslas have less than a third of that. One hopes that will some day be seen in faster and higher quality production. 

They managed to go from producing just over 18,000 cars in 2015 to over 26,000 in 2016, to over 50,000 in 2017, to the 190,000s in 2018 and 2019, to a whopping 293,000 in 2020. But they sold nearly 500,000 cars in 2020 and seem to be growing at a fantastic clip. Here’s the thing, though. Ford exceeded half a million cars in 1916. It took Henry Ford from 1901 to 1911 to get to producing 34,000 cars a year, but only five more years to hit half a million. I read a lot of good and a lot of bad things about Tesla. Ford currently has a little over a $46.5 billion market cap. Tesla’s crested at nearly $850 billion and has since dropped to just shy of $600 billion.

Around 64 million cars are sold each year. Volkswagen is the top, followed by Toyota. Combined, they are worth less than Tesla on paper despite selling over 20 times the number of cars. If Tesla was moving faster, that might make more sense. But here’s the thing. Tesla is about to get besieged by competitors at every side. Nearly every category of car has an electric alternative with Audi, BMW, Volvo, and Mercedes releasing cars at the higher ends and on multiple platforms. Other manufacturers are releasing cars to compete with the upper and lower tiers of each model Tesla has made available. And miniature cars, scooters, bikes, air taxis, and other modes of transportation are causing us to rethink the car. And multi-tenancy of automobiles using ride sharing apps and the potential that self driving cars can have on that are causing us to rethink automobile ownership. 

All of this will lead some to rethink the valuation Tesla enjoys. But watching the moves Tesla makes, and scratching my head over some, certainly makes me think to never under- or over-estimate Tesla or Musk. I don’t want anything to do with Tesla stock. Far too weird for me to grok. But I do wish them the best. I highly doubt the state of electric vehicles, and the coming generational shifts in transportation in general, would be where they are today if Tesla hadn’t done all the good and bad that they’ve done. They deserve a place in the history books when we start looking back at the massive shifts to come. In the meantime, I’ll just call this episode part 1 and wait to see if Tesla matches Ford production levels some day, crashes and burns, gets acquired by another company, or, who knows, packs up and heads to Mars. 

PayPal Was Just The Beginning


We can look around at distributed banking, crypto-currencies, Special Purpose Acquisition Companies, and so many other innovative business strategies as new and exciting and innovative. And they are. But paving the way for them was simplifying online payments to what I’ve heard Elon Musk call just some rows in a database. 

Peter Thiel, Max Levchin, and former Netscaper Luke Nosek had this idea in 1998. Levchin and Nosek had worked together on a startup called SponsorNet New Media while at the University of Illinois Urbana-Champaign, where PLATO and Mosaic had come out of. SponsorNet was supposed to sell online banner ads but would instead be one of four failed startups before they zeroed in on this new thing, where they would enable digital payments for businesses and make it simple for consumers to buy things online. They called the company Confinity and set up shop in beautiful Mountain View, California.

It was an era when a number of organizations taking payments online weren’t doing it so well. Companies would cache credit card numbers on sites, many had weak security, and the rush to sell everything in the forming dot-com bubble fueled a preference for speed over security, privacy, or even reliability. 

Confinity would store the private information in its own banking vaults, keep it secure, and provide access to vendors - taking a small charge per transaction. Where previously only large companies had been able to build systems to take online payments, now small businesses and emerging online stores could compete with the big boys. Thiel and Levchin had hit on something when they launched a service called PayPal to provide a digital wallet and enable online transactions. They even accepted venture funding, taking $3 million from funders like Deutsche Bank, famously beamed over via Palm Pilots. One of those funders was Nokia, investing in PayPal expanding into digital services for the growing mobile commerce market. And by 2000 they were up to 1,000,000 users. 

They saw an opening to make a purchase from a browser or app on one of those new smart phone ideas. And they were rewarded with over 10 million people using the site in just three short years, processing a whopping $3 billion in transactions. 

Now this was the heart of the dot-com bubble. In that time, Elon Musk managed to sell his early startup Zip2, which made city guides on the early internet, to Compaq for around $300 million, pocketing $22 million for himself. He parlayed that payday into X.com, another online payment company. X.com exploded to over 200,000 customers quickly, and as happens frequently with rapid acceleration, a young Musk found himself with a new boss - Bill Harris, the former CEO of Intuit. 

And they helped invent many of the ways we do business online today. One of my favorites of Levchin’s contributions to computing, the Gausebeck-Levchin test, is one of the earliest implementations of what we now call CAPTCHA - you know, when you’re shown a series of letters and asked to type them in to eliminate bots. 
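To make the idea concrete: the bookkeeping behind a challenge like this is tiny - the hard part Gausebeck and Levchin solved was rendering letters distorted enough that the machines of the day couldn’t read them. Here’s a minimal sketch in Python; the function names and the six-letter default are illustrative, not anything from PayPal’s actual implementation:

```python
import random
import string

def make_captcha_challenge(length=6):
    """Generate a random string of letters for the user to retype.
    A real CAPTCHA would render these as a distorted image; this
    sketch only covers the challenge/response bookkeeping."""
    return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

def check_captcha(challenge, response):
    # Compare case-insensitively, since humans type sloppily but
    # still deserve to pass.
    return challenge.strip().upper() == response.strip().upper()
```

The server stores the challenge alongside the session, shows the distorted image, and only processes the payment if the typed response checks out.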

Harris helped the investors de-risk by merging X.com with Confinity. Peter Thiel and Elon Musk are larger than life minds in Silicon Valley, but the two were substantially different. Musk took on the CEO role, but Musk and Thiel were at odds. Thiel believed in a Linux ecosystem and Musk believed in a Windows ecosystem. Thiel wanted to focus on money transfers, similar to the PayPal of today. Given that those were just rows in a database, it was natural that that kind of business would become a red ocean, and indeed today there are dozens of organizations focused on it. But PayPal remains the largest. Musk instead wanted to become a full online banking system - much more ambitious. Ultimately Thiel won and assumed the title of CEO. 

They remained a money transmitter and not a full bank. This means they keep funds that have been sent but not picked up in an interest-bearing account at a bank. 

They renamed the company to PayPal in 2001 and focused on taking the company public, with an IPO as PYPL in 2002. The stock shot up 50% in the first day of trading, closing at $20 per share. Yet another example of the survivors of the dot-com bubble increasing the magnitude of valuations. By then most eBay transactions accepted PayPal and, seeing an opportunity, eBay acquired PayPal for $1.5 billion later in 2002. Suddenly PayPal was the default option for closed auctions, and they would continue their meteoric rise. Musk is widely reported to have made almost $200 million when eBay bought PayPal, and Thiel is reported to have made over $50 million. 

Under eBay, PayPal would grow and, as with most companies that IPO, see a red ocean form in their space. But they brought in people like Ken Howery, who served as the VP of corporate development, would later cofound the investment firm Founders Fund with Thiel, and then become the US Ambassador to Sweden under Trump. He’s the first of what’s called the PayPal Mafia, a couple dozen extremely influential personalities in tech. 

By 2003, PayPal had become the largest payment processor for gambling websites. Yet they walked away from that business to avoid the complicated regulations, until various countries could verify licenses for online gambling venues. 

In 2006 they added security keys and moved to sending codes to phones as a second factor of security validation. In 2008 they bought Fraud Sciences, to gain access to better online risk management tools, as well as Bill Me Later.

As the company grew, they set up a company in the UK and began doing business internationally. They moved their EU presence to Luxembourg in 2007. They’ve often found themselves embroiled in politics, blocking accounts for political financing, the Alex Jones show InfoWars, and, one of the more challenging for them, WikiLeaks in 2010. That led members of Anonymous to hit PayPal with a series of denial of service attacks that brought the site down.

OK, so that early CAPTCHA was just one way PayPal was keeping us secure. It turns out that moving money is complicated - even the $3 you paid for that special Golden Girls t-shirt you bought for a steal on eBay. For example, US states require reporting certain transactions, some countries require actual government approval to move money internationally, and some, like Turkey, require a data center in the country. So on a case-by-case basis PayPal has had to decide whether it’s worth it to increase the complexity of the code and spend precious development cycles to support a given country. In some cases they can step in and, for example, connect the Baidu wallet to PayPal merchants in support of connecting China to PayPal. 

They were spun back out of eBay in 2014, acquired Xoom for $1 billion in 2015, and bought iZettle, who also does point-of-sale systems, for $2.2 billion. And surprisingly they bought online coupon aggregator Honey for $4 billion in 2019. But their best acquisition, to many, would be the tiny app payment processor Venmo, for $26 million. I say this because a friend claimed they prefer it to PayPal because they like the “little guy.”

Out of nowhere, just a little more than 20 years ago, the founders of PayPal and a number of their initial employees willed a now Fortune 500 company into existence. While they were growing, they had to learn about and understand so many capital markets and regulations. This sometimes showed them how they could better invest money. And many of those early employees went on to have substantial impacts in technology. That brain drain helped fuel the Web 2.0 companies that rose. 

One of the most substantial ways was with investment activities. Thiel would go on to put $10 million of his money into Clarium Capital Management, a hedge fund, and into Palantir, a big data AI company with a focus on the intelligence industry that now has a $45 billion market cap. And he funded another organization, which doesn’t at all use our big private data for anything, called Facebook. He put half a million into Facebook as an angel investor - an investment that has paid back billions. He’s also launched the Founders Fund and Valar Ventures, and is a partner at Y Combinator, in capacities where he’s funded everyone from LinkedIn and Airbnb to Stripe to Yelp to Spotify to SpaceX to Asana, and the list goes on and on and on. 

Musk has helped take so many industries online. Why not just apply that startup modality to space? So he launched SpaceX. And to cars, so he helped launch (and backed financially) Tesla. And solar power, so he launched SolarCity. And building tunnels, so he launched The Boring Company. He dabbles in Hyperloops (thus the need for tunnels) and OpenAI and, well, whatever he wants. He’s even done cameos in movies like Iron Man. He’s certainly a personality. 

Max Levchin would remain the CTO and then co-found and become the CEO of Affirm, a public fintech company. 

David Sacks was the COO at PayPal and founded Yammer. Roelof Botha is the former CFO at PayPal who became a partner at Sequoia Capital, one of the top venture capital firms. Yishan Wong was an engineering manager at PayPal who became the CEO of Reddit.

Steve Chen left to join Facebook but hooked back up with Jawed Karim, with whom he’d studied computer science at the University of Illinois Urbana-Champaign, for a new project. They were joined by Chad Hurley, who had created the original PayPal logo, to found YouTube. They sold it to Google for $1.65 billion in 2006. Hurley now owns part of the Golden State Warriors, the Los Angeles MLS team, and Leeds United.

Reid Hoffman was another COO at PayPal, who Thiel termed the “firefighter-in-chief,” and he left to found LinkedIn. After selling LinkedIn to Microsoft for over $26 billion, he became a partner at the venture capital firm Greylock Partners. 

Jeremy Stoppelman and Russel Simmons co-founded Yelp with $1 million in funding from Max Levchin, taking the company public in 2011. And the list goes on.

PayPal paved the way for small transactions on the Internet. A playbook repeated in different parts of the sector by the likes of Square, Stripe, Dwolla, Due, and many others - including Apple Pay, Amazon Payments, and Google Wallet. We live in an era now, where practically every industry has been taken online. Heck, even cars. In the next episode we’ll look at just that, exploring the next steps in Elon Musk’s career after leaving PayPal. 

Playing Games and E-Learning on PLATO: 1960 to 2015


PLATO (Programmed Logic for Automatic Teaching Operations) was an educational computer system that began at the University of Illinois Urbana-Champaign in 1960 and ran into the 2010s in various flavors. 

Wait, that’s an oversimplification. PLATO seemed to develop on an island in the corn fields of Champaign, Illinois, and it sometimes precedes, sometimes symbolizes, and sometimes fast-follows what was happening in computing around the world in those decades.

To put this in perspective: PLATO began on ILLIAC in 1960, a large classic vacuum tube mainframe. Short for the Illinois Automatic Computer, ILLIAC was built in 1952, around seven years after ENIAC was first put into production. As with many early mainframe projects, PLATO I began in response to a military need. We were looking for new ways to educate the masses of veterans using the GI Bill. We had to stretch the reach of college campuses beyond their existing infrastructures.

Computerized testing started with mechanical computing, was digitized with the introduction of the IBM 805 Test Scoring Machine in 1935, and a number of researchers were looking to improve the consistency of education and bring in new technology to help with quality teaching at scale. The post-World War II boom did this for industry as well. Problem is, following the launch of Sputnik by the USSR in 1957, many felt the US was lagging behind in education. So grant money to explore solutions flowed, and the researchers at Illinois were able to capitalize on grants from the US Army, Navy, and Air Force. By 1959, physicists at Illinois began thinking of using that big ILLIAC machine they had access to. Daniel Alpert recruited Don Bitzer to run a project, after false starts with educators around the campus.

Bitzer shipped the first instance of PLATO I in 1960. They used a television to show images, stored images in Raytheon tubes, and built a makeshift keyboard designed for PLATO so users could provide input in interactive menus and navigate. They experimented with slide projectors when they realized the tubes weren’t all that reliable and figured out how to do rudimentary time sharing, expanding to a second concurrent terminal with the release of PLATO II in 1961.

Bitzer was a classic Midwestern tinkerer. He solicited help from local clubs, faculty, high school students, and wherever he could cut a corner to build more cool stuff, he was happy to move money and resources to other important parts of the system. This was the age of hackers and they hacked away. He inspired but also allowed people to follow their own passions. Innovation must be decentralized to succeed.

They created an organization to support PLATO in 1966 - as part of the Graduate College - called the Computer-Based Education Research Laboratory, or CERL. Based on early successes, CERL got more and more funding. Now that they were beyond a 1:1 ratio of users to computers and officially into time sharing, it was time for PLATO III.

There were a number of enhancements in PLATO III. For starters, the system was moved to a CDC 1604 that Control Data CEO William Norris donated to the cause - and expanded to allow for 20 terminals. But it was complicated to create new content and the team realized that content would be what drove adoption. This was true with applications during the personal computer revolution and then apps in the era of the App Store as well. One of many lessons learned first on PLATO. 

Content was in the form of applications that they referred to as lessons. It was a teaching environment, after all. They emulated the ILLIAC for existing content but needed more. People were compiling applications in a complicated language. Professors had day jobs and needed a simpler way to build content. So Paul Tenczar on the team came up with a language specifically tailored to creating lessons. Similar in some ways to BASIC, it was called TUTOR. 

Tenczar released the manual for TUTOR in 1969 and with an easier way of getting content out, there was an explosion in new lessons, and new features and ideas would flourish. We would see simulations, games, and courseware that would lead to a revolution in ideas. In a revolutionary time.
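To give a flavor of what made TUTOR approachable, here’s a sketch of a short lesson unit. The command set (unit, at, write, arrow, answer, wrong, and $$ for comments) follows the published TUTOR documentation; the lesson itself is a hypothetical example, not one taken from PLATO:

```
unit    capitals              $$ a lesson is built from units
at      1205                  $$ position output at line 12, character 5
write   What is the capital of France?
arrow   1510                  $$ draw the response arrow where the student types
answer  Paris                 $$ judged correct
write   Right!
wrong   Lyon                  $$ an anticipated wrong answer
write   No - Lyon is a major city, but not the capital.
```

The answer-judging commands were the heart of it: a professor could anticipate responses and give targeted feedback without writing any parsing logic, which is a big part of why lessons exploded once the manual shipped.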

The number of hours logged by students and course authors steadily increased. The team became ever more ambitious. And they met that ambition with lots of impressive achievements.

Now that they were comfortable with the CDC 1604 they knew that the new content needed more firepower. CERL negotiated a contract with Control Data Corporation (CDC) in 1970 to provide equipment and financial support for PLATO. Here they ended up with a CDC Cyber 6400 mainframe, which became the foundation of the next iteration of PLATO, PLATO IV.

PLATO IV was a huge leap forward on many levels. They had TUTOR but with more resources could produce even more interactive content and capabilities. The terminals were expensive and not so scalable. So in preparation for potentially thousands of terminals in PLATO IV they decided to develop their own. 

This might seem a bit space age for the early 1970s, but what they developed was a flat panel plasma display with a touch screen. It was 512x512 and rendered 60 lines per second at 1260 baud. The plasma panel had inherent memory - a lit pixel stayed lit without being refreshed - made possible by the fact that they weren’t converting digital signals to analog, as is done on CRTs. Instead, it was a fully digital experience. The flat panel used infrared to see where a user was touching, giving users some of their first exposure to touch screens. The touch grid was 16 by 16 rather than 512 by 512, but that was more than enough to take them over the next decade.
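The coarse touch grid worked because each of the 16x16 infrared cells simply covered a 32x32-pixel region of the 512x512 display. A minimal sketch of that mapping (the function names are mine, not PLATO’s):

```python
CELLS = 16               # 16x16 infrared touch grid
PIXELS = 512             # 512x512 plasma display
SPAN = PIXELS // CELLS   # each touch cell spans 32x32 pixels

def touch_cell(x, y):
    """Map a display coordinate (0-511) to its touch cell (0-15)."""
    return (x // SPAN, y // SPAN)

def cell_bounds(col, row):
    """Pixel rectangle a touch cell covers: (x0, y0, x1, y1), inclusive."""
    return (col * SPAN, row * SPAN,
            (col + 1) * SPAN - 1, (row + 1) * SPAN - 1)

print(touch_cell(511, 0))   # the corner pixel lands in cell (15, 0)
print(cell_bounds(0, 0))    # cell (0, 0) covers pixels (0, 0) through (31, 31)
```

For menu-driven lessons, 256 distinct touch targets were plenty - a lesson just had to draw its buttons on cell boundaries.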

The system could render basic bitmaps but some lessons needed richer media - what we might call multimedia today. The Raytheon tubes used in previous systems were more of a CRT technology and had plenty of drawbacks. So for newer machines they also included a microfiche machine that projected images onto the back of the screen. 

The terminals were a leap forward. There were other programs going on at about the same time during the innovative bursts of PLATO, like the Dartmouth Time Sharing System, or DTSS, project that gave us BASIC instead of TUTOR. Some of these systems also had rudimentary forms of forums, such as EIES, and the BBS and Usenet cultures would emerge later in the decade. But PLATO represented a unique look into the splintered networks of the Time Sharing age.

Combined with the innovative lessons and newfound collaborative capabilities the PLATO team was about to bring about something special. Or lots of somethings that culminated in more. One of those was Notes.

PLATO Notes was created by David R. Woolley in 1973. Tenczar asked the 17-year-old Woolley to write a tool that would allow users to report bugs with the system. Until then, bug reports went into a notes file that anyone could just delete. So Woolley made notes permanent, with each note automatically tagged with its author and date when updated. He expanded it to allow for 63 responses per note and, when opened, it showed the most recent notes. People came up with other features, and so a menu-driven interface emerged, providing access to System Announcements, Help Notes, and General Notes. 

But the notes were just the start. In 1973, seeing the need for even more ways to communicate with other people using the system, Doug Brown wrote a prototype for Talkomatic. Talkomatic was a chat program that showed each character as people typed it. Woolley helped Brown and they added channels with up to five people per channel. Others could watch the chat as well. It would be expanded and officially supported as a tool called Term-Talk. That was entered by using the TERM key on a console, which allowed for a conversation between two people. You could TERM, or chat, a person, and then they could respond or mark themselves as busy. 

Because the people writing this stuff were also the ones supporting users, they added another feature, the ability to monitor another user, or view their screen. And so programmers, or consultants, could respond to help requests and help get even more lessons going. And some at PLATO were using ARPANET, so it was only a matter of time before word of Ray Tomlinson’s work on electronic mail leaked over, leading to the 1974 addition of personal notes, a way to send private mail engineered by Kim Mast.

As PLATO grew, the amount of content exploded. They added categories to Notes in 1975 which led to Group Notes in 1976, and comments and linked notes and the ability to control access.

But one of the most important innovations PLATO will be remembered for is games. Anyone who has played an educational game will note that school lessons and games aren’t always all that different. Since Rick Blomme had ported Spacewar! to PLATO in 1969 and added a two-player option, multi-player games had been on the rise. They made leader boards for games like Dogfight so players could get early forms of game rankings. Games like Airfight, Airace, and Galactic Attack would follow those.

MUDs were another form of games that came to PLATO. Colossal Cave Adventure had come in 1975 for the PDP-10, so again, some of these things were happening in a vacuum - where there were influences and where innovations were found independently in isolation is hard to say. But the crawlers exploded on PLATO. We got Moria, Oubliette by Jim Schwaiger, pedit5, crypt, dungeon, avatar, and drygulch. We saw the rise of intense storytelling and different game mechanics, mostly inspired by Dungeons and Dragons. As PLATO terminals found their way into high schools and other universities, the number of games and the amount of time spent playing them exploded, with estimates of 20% of time on PLATO being spent playing games. 

PLATO IV would grow to support thousands of terminals around the world in the 1970s. It was a utility. Schools (and even some parents) leased lines back to Champaign-Urbana and many in computing thought that these timesharing systems would become the basis for a utility model in computing, similar to the cloud model we have today. But computing had to pass through the era of the microcomputer before boomeranging back to timesharing. 

That microcomputer revolution would catch off guard many who didn’t see the correlation between Moore’s Law and the growing number of factories and standardization that would lead to microcomputers. Control Data had bet big on the mainframe market - and PLATO. CDC would sell mainframes to other schools to host their own PLATO instance. This is where it went from a timesharing system to a network of computers that did timesharing. Like a star topology. 

Control Data looked to PLATO as one form of what the future of the company would be. Norris saw this mainframe with thousands of connections as a way to lease time on the computers. CDC took PLATO to market as CDC Plato. Here, schools and companies alike could benefit from distance education. And for a while it seemed to be working. Financial companies and airlines bought systems and the commercialization was on the rise, with over a hundred PLATO systems in use as we made our way to the middle of the 1980s. Even government agencies like the Department of Defense used them for training. But this just happened to coincide with the advent of the microcomputer.

CDC made their own terminals that were often built with the same components that would be found in microcomputers but failed to capitalize on that market. Corporations didn’t embrace the collaboration features and often had these turned off. Social computing would move to bulletin boards. And CDC would release versions of PLATO as micro-PLATO for the TRS-80, Texas Instruments TI-99, and even Atari computers. But the bureaucracy at CDC had slowed things down to the point that they couldn’t capitalize on the rapidly evolving PC industry. And prices were too high in a time when home computers were just moving from a hobbyist market to the mainstream. 

The University of Illinois spun PLATO out into its own organization called University Communications, Inc (or UCI for short) and closed CERL in 1994. That was the same year Marc Andreessen co-founded Mosaic Communications Corporation, makers of Netscape - successor to NCSA Mosaic. NCSA, The National Center for Supercomputing Applications, had also benefited from National Science Foundation grants since its start in 1986. And all those students who flocked to the University of Illinois because of programs like PLATO had brought with them more expertise.

UCI continued PLATO as NovaNet, which was acquired by National Computer Systems and then Pearson corporation, finally getting shut down in 2015 - 55 years after those original days on ILLIAC. It evolved from the vacuum tube-driven mainframe in a research institute with one terminal to two terminals, to a transistorized mainframe with hundreds and then over a thousand terminals connected from research and educational institutions around the world. It represented new ideas in programming and programming languages and inspired generations of innovations. 

That aftermath includes:

  • The ideas. PLATO developers met with people from Xerox PARC starting in the 70s and inspired some of the work done at Xerox. Yes, they seemed isolated at times but they were far from it. They also cross-pollinated ideas to Control Data. One way they did this was by trading some commercialization rights for more mainframe hardware. 
  • One of the easiest connections to draw from PLATO to the modern era is how the notes files evolved. Ray Ozzie graduated from Illinois in 1979 and went to work for Data General and then Software Arts, makers of VisiCalc. The corporate world had nothing like the culture that had evolved out of the notes files in PLATO Notes. Today we take collaboration tools for granted, but when Ozzie was recruited by Lotus, the makers of 1-2-3, he agreed to join only if they would fund a project to capture that collaborative spirit that still seemed stuck in the splintered PLATO network. The Internet and networked computing in companies were growing, and he knew he could improve on the notes files in a way that companies could make use of. He started Iris Associates in 1984 and shipped a tool in 1989. That would evolve into what would be called Lotus Notes when the company was acquired by Lotus in 1994, and then, when Lotus was acquired by IBM, into Domino - surviving to today as HCL Domino. Ozzie would go on to become a CTO and then the Chief Software Architect at Microsoft, helping spearhead the Microsoft Azure project.
  • Collaboration. Those notes files were also some of the earliest newsgroups. But they went further. Talkomatic introduced real-time text chats. The very concept of a digital community and its norms and boundaries was being tested, and challenges we still face, like discrimination, were already manifesting themselves then. But it was inspiring, and between stints at Microsoft, Ray Ozzie founded Talko in 2012 based on what he learned in the 70s working with Talkomatic. That company was acquired by Microsoft and some of the features ported into Skype. 
  • Another way Microsoft benefited from the work done on PLATO was with Microsoft Flight Simulator. That was originally written by Bruce Artwick after leaving the university based on the flight games he’d played on PLATO. 
  • Mordor: The Depths of Dejenol was cloned from Avatar
  • Silas Warner was connected to PLATO from terminals at Indiana University. During and after school, he wrote software for companies but wrote Robot War for PLATO and then co-founded Muse Software, where he wrote Escape!, a precursor for lots of other maze runners, and then Castle Wolfenstein. The name would get bought for $5,000 after his company went bankrupt and would become one of the early blockbuster first-person shooters when released as Wolfenstein 3D. Then John Carmack and John Romero created Doom. But Warner would go on to work with some of the best in gaming, including Sid Meier.  
  • Paul Alfille built the game Freecell for PLATO and Control Data released it for all PLATO systems. Jim Horne played it from the PLATO terminals at the University of Alberta and eventually released it for DOS in 1988. Horne went to work for Microsoft, who included it in the Microsoft Entertainment Pack, making it one of the most popular software titles played on early versions of Windows. He got 10 shares of Microsoft stock in return and it’s still part of Windows 10 as part of the Microsoft Solitaire Collection.
  • Robert Woodhead and Andrew Greenberg got onto PLATO from their terminals at Cornell University, where they were able to play games like Oubliette and Empire. They would write a game called Wizardry that took some of the best that the dungeon crawl multi-players had to offer and brought it into a single-player computer, then console, game. I spent countless hours playing Wizardry on the Nintendo NES and have played many of the spin-offs, which came as late as 2014. Not only did the game inspire generations of developers to write dungeon games, but some of the mechanics inspired features in the Ultima series, Dragon Quest, Might and Magic, The Bard’s Tale, Dragon Warrior, and countless manga. Greenberg would go on to help with Q-Bert and other games before going on to work with the IEEE. Woodhead would go on to work on other games like Star Maze. I met Woodhead shortly after he wrote Virex, an early anti-virus program for the Mac that would later become McAfee VirusScan for the Mac.
  • Paul Tenczar was in charge of the software developers for PLATO. After that he founded Computer Teaching Corporation and introduced EnCORE, which was later renamed TenCORE. They grew to 56 employees by 1990 and ran until 2000. He returned to the University of Illinois to put RFID tags on bees, contributing to computing for nearly 5 decades and counting. 
  • Michael Allen used PLATO at Ohio State University before looking to create a new language. He was hired at CDC, where he became a director in charge of Research and Development for education systems. There, he developed the ideas for a new computer language authoring system, which became Authorware, one of the most popular authoring packages for the Mac. That would merge with MacroMind to become Macromedia, where bits and pieces got put into Dreamweaver and Shockwave as those were released. After Adobe acquired Macromedia, he would write a number of books and create even more e-learning software authoring tools. 


So PLATO gave us multi-player games, new programming languages, instant messaging, online and multiple choice testing, collaboration forums, message boards, multi-person chat rooms, early rudimentary remote screen sharing, their own brand of plasma display and all the research behind printing circuits on glass for that, and early research into touch sensitive displays. And as we’ve shown through just a few of the many people who contributed to computing afterward, they helped inspire an early generation of programmers and innovators. 

If you like this episode I strongly suggest checking out The Friendly Orange Glow from Brian Dear. It’s a lovely work with just the right mix of dry history and flourishes of prose. A short history like this can’t hold a candle to a detailed anthology like Dear’s book. 

Another well researched telling of the story can be found in a couple of chapters of A People’s History Of Computing In The United States, from Joy Rankin. She does a great job drawing a parallel (and sometimes a direct line) from the Dartmouth Time Sharing System and other early networks. And yes, terminals dialing into a mainframe and using resources over telephone and leased lines was certainly a form of bridging infrastructures and seemed like a network at the time. But no mainframe could have scaled to the point of becoming a utility in the sense that all of humanity could access what was hosted on it. 

Instead, the ARPANET was put online and grew from 1969 to 1990, and working out the hard scientific and engineering principles behind networking protocols gave us TCP/IP. In her book, Rankin makes great points about the BASIC and TUTOR applications helping shape more of our modern world in how they inspired the future of how we used personal devices once connected to a network. The scientists behind ARPANET, then NSFnet and the Internet, did the work to connect us. You see, those dial-up connections were expensive over long distances. By 1974 there were 47 computers connected to the ARPANET and by 1983 we had TCP/IPv4. And much like Bitzer allowing games, they didn’t seem to care too much how people would use the technology but wanted to build the foundation - a playground for whatever people wanted to build on top of it.

So the administrative and programming team at CERL deserve a lot of credit. The people who wrote the system, the generations who built features and code only to see it become obsolete, came and went - but the compounding impact of their contributions can be felt across the technology landscape today. Some of that is people rediscovering work done at CERL, some is directly inspired, and some has been lost, only to probably be rediscovered in the future. One thing is for certain: their contributions to e-learning are unmatched by any other system out there. And their technical contributions, both those patented and those they either couldn’t or didn’t think to patent, are immense. 

Bitzer and the first high schoolers and then graduate students across the world helped to shape the digital world we live in today. More from an almost sociological aspect than a technical one. And the deep thought applied to the system lives on today in so many aspects of our modern world. Sometimes that’s a straight line and others it’s dotted or curved. Looking around, most universities have licensing offices now, to capitalize on the research done. Check out a university near you and see what they have available for license. You might be surprised. As I’m sure many in Champaign were after all those years. Just because CDC couldn’t capitalize on some great research doesn’t mean we can’t. 

So Long, Fry's Electronics


We’ve covered RadioShack but there are a few other retail stores I’d like to cover as well. CompUSA, Circuit City, and Fry’s to name a few. Not only is there something to be learned from the move from brick and mortar electronics chains to e-commerce, but there’s plenty to be learned about how to treat people, how people perceived computers, and what we need and when, as well. 

You see, Fry’s was one of the few places you could walk in, pick a CPU, find a compatible motherboard, pick a sweet chassis to put it in, get a power supply, a video card, some memory, back then probably a network card, maybe some sweet fans, a cooling system for the CPU you were about to overclock, an SSD to boot a machine, a hard drive to store stuff, a DVD drive, a floppy just in case, pick up some velcro wrap to keep the cables at bay, get a TV, a cheap knockoff smart watch, a VR headset that would never work, maybe a safe since you already have a cart, a soundbar ‘cause you did just get a TV, some headphones for when you’ll keep everyone else up with the soundbar, a couple of resistors for that other project, a fixed frequency video card for that one SGI in the basement, a couple smart plugs, a solar backpack, and a CCNA book that you realize is actually 2 versions out of date when you go to take the test. Yup, that was a great trip. And ya’ there’s also a big bag of chips and a 32 ounce of some weird soda gonna’ go in the front seat with me. Sweet. Now let’s just toss the cheap flashlight we just bought into the glove box in case we ever break down and we’re good to go home and figure out how to pay for all this junk on that new Fry’s Credit Card we just opened. 

But that was then and this is now. Fry’s announced it was closing all of its stores on February 24th, 2021 - the week we’re recording this episode. To quote the final statement on their website:

“After nearly 36 years in business as the one-stop-shop and online resource for high-tech professionals across nine states and 31 stores, Fry’s Electronics, Inc. (“Fry’s” or “Company”), has made the difficult decision to shut down its operations and close its business permanently as a result of changes in the retail industry and the challenges posed by the Covid-19 pandemic. The Company will implement the shut down through an orderly wind down process that it believes will be in the best interests of the Company, its creditors, and other stakeholders.

The Company ceased regular operations and began the wind-down process on February 24, 2021. It is hoped that undertaking the wind-down through this orderly process will reduce costs, avoid additional liabilities, minimize the impact on our customers, vendors, landlords and associates, and maximize the value of the Company’s assets for its creditors and other stakeholders.”

Wow. Just wow. I used to live a couple of miles from a Fry’s and it was a major part of furthering my understanding of arcane, bizarre, sometimes emergent, and definitely dingy areas of computing. And if those adjectives don’t seem to have been included lovingly, they most certainly are. You see every trip to Fry’s was strange. 

Donald Fry founded Fry’s Food and Drug in 1954. The store rose to prominence in the 50s and 60s until his brother Charles Fry sold it off in 1972. As a part of Kroger it still exists today, with 22,000 employees. But this isn’t the story of a supermarket chain. I guess I did initially think the two were linked because the logos look somewhat similar - but that’s where their connection ends. 

Instead, let’s cover what happened to the $14 million the family got from the sale of the chain. Charles Fry gave some to his sons John, Randy, and David. They added Kathryn Kolder and leased a location in Sunnyvale, California to open the first Fry’s Electronics store in 1985.

This was during the rise of the microcomputer. The computing industry had all these new players who were selling boards and printers and floppy drives. They put all this stuff in bins kinda’ like you would in a grocery store and became a one-stop shop for the hobbyist and the professional alike. Unlike groceries, the parts didn’t expire so they were able to still have things selling 5 or 10 years later, albeit a bit dusty. 

1985 was the era when many bought integrated circuits, motherboards, and soldering irons and built their own computers. They saw the rise of the microprocessor, the 80286, and the x86 line. And as we moved into an era of predominantly x86 clones of the IBM PC, the buses and cards became standard. Provided a power supply had a Molex connector, it was probably good to light up most motherboards and hard drives. IDE became the standard, then later SATA. But parts were pretty interchangeable.

Knowing groceries, they also sold those. Get some oranges and a microprocessor. They stopped selling those but always sold snacks until the day they closed down. But services were always a thing at Fry’s. Those who didn’t want to spend hours putting spacers on a motherboard and putting a build together themselves could pay Fry’s to do it for them.

They also sold other electronics. Sometimes the selection seemed totally random. I bought my first MP3 player at a Fry’s - the Diamond Rio. And funny LED lights for computer fans before that really became a thing. Screwdriver kits, thermal grease, RAM chips, unsoldered boards, weird little toys, train sets, coloring books, certification books for that MCSE test I took in 2002, and whatever else I could think of. 

The stores were kitschy. Some had walls painted like circuit boards. Some had alien motifs. Others were decorated like the old west. It’s like they adorned the joint with whatever weird stuff they could find. People were increasingly going online. In 1997 they bought the domain. To help people get online, they started selling Internet access in 2000. But by then there were so many vendors to help people get online that it wasn’t going to be successful. People were increasingly shopping online, so they bought Cyberian Outpost in 2001 and moved their online store to - which later just pointed to 

The closing of a number of Radio Shack stores and Circuit City and CompUSA seemed to give them a shot in the arm for a bit. But you could buy computers at Gateway Country or through Dell. Building your own computer was becoming more and more a niche industry for gamers and others who needed specific builds. 

They grew to 34 stores at their height. Northern California stores in Campbell, Concord, Fremont, Roseville, Sacramento, San Jose, and that original Sunnyvale (now across the street from the old original Sunnyvale) and Southern California stores in Burbank, City of Industry, Fountain Valley, Manhattan Beach, Oxnard, San Diego, San Marcos, and the little one in Woodland Hills - it seemed like everyone in California knew to go to Fry’s when you needed some doodad. In fact, in the documentary about General Magic, they talk about constantly going back and forth to Fry’s to get parts to build their device. 

But they did expand out of California with 8 stores in Texas, two in Arizona, one in Illinois, one in Indiana, one in Nevada, one in Oregon, and another in Washington. In some ways it looked as though they were about to have a chain that could rival the supermarket chain their dad helped build. But it wasn’t meant to be. 

With the fall of Radio Shack, CompUSA, and Circuit City, I was always surprised Fry’s stayed around. Tandy started a similar concept called Incredible Universe but that didn’t last too long. But I loved them. The customer service wasn’t great. The stores were always a little dirty. But I never left empty-handed. Even when I didn’t find what I was looking for. 

Generations of computer enthusiasts bought everything from scanners to printers at Fry’s. They were sued over how they advertised, for sexual harassment, during divorce settlements, and over how they labeled equipment. They lost money in embezzlements, and as people increasingly turned to Amazon and other online vendors for the best price for that MSI motherboard or a screen for the iPhone, keeping such a massive inventory was putting them out of business. So in 2019, amidst rumors they were about to go out of business, they moved to stocking the stores via consignment. Not all vendors upstream could do that, leading to an increasingly strange selection and customers finding what they needed less and less often. 

Then came COVID. They closed a few stores and, between the last ditch effort of consignment and empty bins as hardware moved, they just couldn’t do it any more. Like the flashier Circuit City and CompUSA before them - less selection, but more complete systems - they finally closed their doors in 2021, after 36 years. And so we live in an era where many computers, tablets, and phones are no longer serviceable or have parts that can be swapped out. We live in an era where, when we can service a device with those parts, we often go online to source them. And we live in an era where, if we need instant gratification to replace components, there are plenty of retail chains like Target or Walmart that sell components and move far more than Fry’s did, so are more competitive on price. We live in an era where we don’t need to go into a retailer for software and books, both sold at high margins. There are stores on the Apple and Microsoft and Google platforms for that. And of course 2020 was a year that many retail chains had to close their doors in order to keep their employees safe, losing millions in revenue. 

All of that eventually became too much for other computer stores as each slowly eroded the business. And now it’s become too much for Fry’s. I will always remember the countless hours I strolled around the dingy store, palming this adapter and that cable and trying to figure out what components might fit together so I can get the equivalent of an AlienWare computer for half the cost. And I’ll even fondly remember the usually sub-par customer service, because it forced me to learn more. And I’ll always be thankful that they had crap sitting around for a decade because I always learned something new about the history of computers in their bins of arcane bits and bytes sitting around.

And their closing reminds us, as the closings of former competitors and even other stores like Borders does, that an incredible opportunity lies ahead of us. These shifts in society also shift the supply chain. They used to get a 50% markup on software and a hefty markup on the books I wrote. Now I can publish software on the App Stores and pay less of my royalties to the retailers. Now I don’t need a box and manual for software. Now books don’t have to be printed and can even be self-published in those venues if I see fit to do so. And while Microsoft, Apple, and Google’s “Services” revenue or revenue from Target once belonged to stores like Fry’s, the opportunities have moved to linking and aggregating and adding machine learning and looking to fields that haven’t yet been brought into a more digital age - or even to harkening back to simpler times and providing a more small town white glove approach to life. Just as the dot com crash created a field where companies like Netflix and Google could become early unicorns, so every other rise and fall creates new, uncharted green fields and blue oceans. Thank you for your contributions - both past and future.

Apple 1997-2011: The Return Of Steve Jobs


Steve Jobs left Apple in 1985. He co-founded NeXT Computers and took Pixar public. He then returned to Apple as the interim CEO in 1997 at a salary of $1 per year. Some of the early accomplishments on his watch were started before he got there. But turning the company back around was squarely on him and his team. 

By the end of 1997, Apple had moved to build-to-order manufacturing, powered by an online store built on WebObjects, the NeXT application server. They killed off a number of models, simplifying the lineup of products, and also killed the clone deals, ending licensing of the operating system to other vendors who were at times building sub-par products.

And they were busy. You could feel the frenetic pace. They were busy at work weaving the raw components from NeXT into an operating system that would be called Mac OS X. They announced a partnership that would see Microsoft invest $150 million into Apple to settle patent disputes; in exchange, Internet Explorer would be bundled on the Mac and Microsoft committed to keep releasing Office for the Mac. By then, Apple had $1.2 billion in cash reserves again and a streamlined company that was ready to move forward - but 1998 was a bottoming out of sorts, with Apple doing just shy of $6 billion in revenue. To move forward, they took a little lesson from the past and released a new all-in-one computer. One that put the color back into that Apple logo. Or rather, removed all the colors but Aqua blue from it.

The return of Steve Jobs invigorated many, such as Jony Ive, who is reported to have had a resignation letter in his back pocket when he met Jobs. Their collaboration led to a number of innovations at a furious pace, starting with the iMac. The first iMacs were shaped like gumdrops and the color of candy as well. The original Bondi blue had commercials showing all the cords in a typical PC setup and then the new iMac, “as unPC as you can get.” The iMac was supposed to be a simple way to get on the Internet. But the ensuing upgrades allowed for far more than that.

The iMac put style back into Apple, and even into computers. Subsequent releases came in candy colors like Lime, Strawberry, Blueberry, Grape, Tangerine, and later on Blue Dalmatian and Flower Power. The G3 chipset bled out into other, more professional products like the Blue and White G3 tower, which featured a slightly faster processor than the beige tower G3 but a much cooler look - and it was very easy to get into compared to any other machine on the market at the time. And the clamshell laptops used the same design language. Playful, colorful, but mostly as fast as their traditional PowerBook counterparts.

But the team had their eye on a new strategy entirely. Yes, people wanted to get online - but these computers could do so much more. Apple wanted to make the Mac the Digital Hub for content. This centered around a technology that had been co-developed by Apple, Sony, Panasonic, and others called IEEE 1394. But that was kinda’ boring so we just called it FireWire.

Begun at Apple in 1986, FireWire had become a port that was on most digital cameras at the time. USB wasn’t fast enough to load and unload a lot of newer content like audio and video from cameras to computers. And I can clearly remember that by the year 1999 we were all living in what Jobs called a “new emerging digital lifestyle.” This led to a number of releases from Apple. One was iMovie. Apple included it with the new iMac DV model for free. That model dumped the fan (which Jobs never liked, going back to the early days of Apple) and added FireWire and the ability to add an AirPort card. Oh, and they released an AirPort base station in 1999 to help people get online easily. It is still one of the simplest router and wi-fi devices I’ve ever used. And it was sleek, with the new Graphite design language that would carry Apple’s professional devices for years.

iMovie was a single place to load all those digital videos and turn them into something else. And there was another format on the rise: MP3. Most everyone I’ve ever known at Apple loves music. It’s in the DNA of the company, going back to Wozniak and Jobs and their love of musicians like Bob Dylan in the 1970s. The rise of the transistor radio and then the cassette and Walkman had opened our eyes to the democratization of what we could listen to as humans. And the MP3 format, which had been around since 1993, was on the rise. People were ripping and trading songs, and Apple looked at a tool called Audion and another called SoundJam and decided that rather than Sherlocking them by building the functionality into the OS, they would buy SoundJam in 2000. The new software, which they called iTunes, allowed users to rip and burn CDs easily. Apple then added iPhoto, iWeb, and iDVD - for photos, creating web sites, and making DVDs, respectively. The digital hub was coming together.

But there was another very important part of that whole digital hub strategy. Now that we had music on our computers, we needed something more portable to listen to that music on. There were MP3 players like the Diamond Rio out there - and there had been, going back to the waning days of the Digital Equipment Research Lab - but they were either clunky, or had poor design, or were just crappy and cheap. And most only held an album or two. I remember walking down that aisle at Fry’s about once every other month, waiting and hoping. But nothing good ever came.

That is, until Jobs and the Apple hardware engineering lead Jon Rubinstein found Tony Fadell. He had been at General Magic, you know, the company that ushered in mobility as an industry. And he’d built Windows CE mobile devices for Philips in the Velo and Nino. But when we got him working with Jobs, Rubinstein, and Jony Ive on the industrial design front, we got one of the most iconic devices ever made: the iPod.

And the iPod wasn’t all that different on the inside from a Newton. Blasphemy, I know. It sported a pair of ARM chips, and Ive harkened back to simpler times when he based the design on a transistor radio. Attention to detail - and the lack thereof in the Sony Discman - has propelled Apple to sell more than 400 million iPods to this day. By the time the iPod was released in 2001, Apple revenues had jumped to just shy of $8 billion, but then dropped back down to $5.3 billion. But everything was about to change. And part of that was that the iPod design language was about to leak out to the rest of the products, with white iBooks, white Mac Minis, and other white devices as a design language of sorts.

To sell all those iDevices, Apple embarked on a strategy that seemed crazy at the time: they opened retail stores. They hired Ron Johnson and opened two stores in 2001. They would grow to over 500 stores and hit a billion dollars in sales within three years. Johnson had been the VP of merchandising at Target, and with the teams at Apple he came up with the idea of taking payment without cash registers (after all, you have an internet-connected device you want to sell people) and the Genius Bar.

And generations of devices came that led people back into the stores. The G4 came along - as did faster RAM. And while Apple was updating the classic Mac operating system, they were also hard at work preparing NeXT to go across the full line of computers. They had been working the bugs out in Rhapsody and then Mac OS X Server, but the client OS, codenamed Kodiak, went into beta in 2000 and was then released as Cheetah, a dual-boot option, in 2001. And thus began a long line of big cats: Puma, then Jaguar in 2002, Panther in 2003, Tiger in 2005, Leopard in 2007, Snow Leopard in 2009, Lion in 2011, and Mountain Lion in 2012, before moving to the new naming scheme that uses famous places in California.

Mac OS X finally provided a ground-up, modern, object-oriented operating system. They built the Aqua interface on top of it. Beautiful, modern, sleek. Even the backgrounds! The iMac would go from a gumdrop to a sleek flat panel on a metal stand, like a sunflower. Jobs and Ive are both named on the patents for this as well as many of the other inventions that came along in support of the rapid device rollouts of the day. 

Jaguar, or 10.2, turned out to be a big update. They added Address Book and iChat - now called Messages - and, after nearly two decades, replaced the 8-bit Happy Mac with a grey Apple logo in 2002. Yet another sign they were no longer just a computer company. Some of these things needed a server and storage, so Apple released the Xserve in 2002 and the Xserve RAID in 2003. The pro devices also started to transition from the grey Graphite look to brushed metal, which we still use today.

Many wanted to step beyond just listening to music. There were expensive tools for creating music, like Pro Tools. And don’t get me wrong, you get what you pay for. It’s awesome. But democratizing the creation of media meant Apple wanted a piece of software to create digital audio - and they released GarageBand in 2004. For this they again turned to an acquisition, Emagic, which had a tool called Logic Audio. I still use Logic to cut my podcasts. But with GarageBand they stripped it down to the essentials and released a tool that proved wildly popular, providing an on-ramp for many into the audio engineering space.

Not every project worked out. Apple had ups and downs in revenue and sales in the early part of the millennium. The G4 Cube was released in 2000 and, while it is hailed by industrial designers as one of the greatest designs ever, it was discontinued in 2001 due to low sales. But Steve Jobs had been hard at work on something new. Those iPods that were becoming the cash cow at Apple and changing the world, turning people into white-earbud-clad zombies spinning those click wheels, were about to get an easier way to put media into iTunes and onto the device.

The iTunes Store was released in 2003. Here, Jobs parlayed the success at Apple along with his own brand to twist the arms of executives from the big 5 record labels to finally allow digital music to be sold online. Each song was a dollar. Suddenly it was cheap enough that the music trading apps just couldn’t keep up. Today it seems like everyone just pays a streaming subscription but for a time, it gave a shot in the arm to music companies and gave us all this new-found expectation that we would always be able to have music that we wanted to hear on-demand. 

Apple revenue was back up to $8.25 billion in 2004. But Apple was just getting started. The next seven years would see that revenue climb to $13.9 billion in 2005, $19.3 billion in 2006, $24 billion in 2007, $32.4 billion in 2008, $42.9 billion in 2009, $65.2 billion in 2010, and a staggering $108.2 billion in 2011.

After years on the PowerPC chipset, Apple transitioned new computers to Intel chips in 2005 and 2006. Keep in mind that most people used desktops at the time and just wanted fast machines - and Intel could produce faster chips and was moving faster. It was also the era when the Mac was really open-source friendly, so being able to load in the best the Linux and Unix worlds had to offer, for software inside projects or on servers, was made all the easier. The Intel transition also helped with what we call the “App Gap,” where applications written for Windows could now be virtualized on the Mac. This helped the Mac get much more adoption in businesses.

Again, the pace was frenetic. People had been almost begging Apple to release a phone for years. The Windows Mobile devices, the BlackBerry, the flip phones, even the Palm Treo - they were all crap in Jobs’ mind. Even the ROKR that had iTunes on it was crap. So Apple released the iPhone in 2007 in a now-iconic Jobs presentation. The early version didn’t have apps, but it was instantly one of the more sought-after gadgets. And in an era where people paid $100 to $200 for phones, it changed the way we thought of the devices. In fact, the push notifications and app culture and always-on connectivity fulfilled the General Magic dream that the Newton never could, and truly moved us all into an always-on Internet culture.

The Apple TV was also released in 2007. I can still remember people talking about Apple releasing a television at the time - the same way they talk about Apple releasing a car. It wasn’t a television, though; it was a small whitish box that resembled a Mac Mini, just with a different, media-browsing type of Finder. Now it’s effectively an app to bootstrap the media apps on a Mac.

It had been a blistering 10 years. We didn’t even get into Pages, FaceTime, and so much more. And they weren’t done just yet. The iPad was released in 2010. By then, Apple revenues exceeded those of Microsoft. The return, the comeback, was truly complete.

Similar technology to what was used to build the Apple online store was also used to develop the iTunes Store, and then the App Store in 2008. Here, rather than going to a site you might not trust and downloading an installer with crazy levels of permissions, users could get software from one trusted storefront.

One place where it’s still a work in progress to this day was iTools, released in 2000, rebranded to .Mac (or dot Mac) in 2002, renamed MobileMe in 2008, and now called iCloud. Apple’s vision to sync all of our data between our myriad devices wirelessly was a work in progress and never met the lofty goals set out. Some services, like Find My iPhone, work great. Others, not so much. Jobs famously fired the team lead at one point. And while it’s better than it was, it’s still not where it needs to be.

Steve Jobs passed away in 2011 at 56 years old. His first act at Apple changed the world, ushering in first the personal computing revolution and then the graphical interface revolution. He left an Apple that meant something. He returned to a demoralized Apple and brought digital media, portable music players, the iPhone, the iPad, the Apple TV, the iMac, the online music store, the online App Store, and so much more. The world had changed in that time, so he left, well, one more thing. You see, when they started, privacy and security weren’t much of a thing. Keep in mind, computers didn’t have hard drives. The early days of the Internet after his return were a fairly safe Internet world. But by the time he passed away there were some troubling trends. The data on our phones and computers could weave together nearly every bit of our lives for an outsider. Not only could this lead to identity theft, but with the growing advertising networks and machine learning capabilities, the consequences of privacy breaches on Apple products could be profound for society. He left an ethos behind: build great products, but not at the expense of those who buy them. One that his successor Tim Cook has maintained.

On the outside it may seem like that daunting 10-plus years of product releases has slowed. We still have the MacBook, the iMac, a tower, a mini, an iPhone, an iPad, an Apple TV. We now have HomeKit, a HomePod, new models of all those devices, Apple silicon, and some new headphones - but more importantly, Apple has had to retreat a bit internally and direct some of those product development cycles to privacy, protecting users, and shoring up the security model. Managing a vast portfolio of products in the largest company in the world means doing those things isn’t always altruistic. Big companies can mean big lawsuits when things go wrong. These will come up as we cover the history of the individual devices in greater detail.

The history of computing is full of stories of great innovators. Very few got a second act. Few, if any, had a first act as impactful as Steve Jobs did - let alone a second. And it wasn’t just him in any of these. There are countless people, from software developers to support representatives to product marketing gurus to the people who write the documentation. It was all of them, working with inspiring leadership on world-class products, who helped as much as any other organization in the history of computing to shape the digital world we live in today.

From Moveable Type To The Keyboard


QWERTY. It’s a funny word. Or not a word. But also not an acronym per se. Those are the first six letters on the top row of a modern keyboard. Why? Because spreading out frequently paired letters allowed the hammers on a traditional typewriter to travel to and fro without jamming, letting us be more efficient with our time while typing. The concept of the keyboard goes back almost as far as moveable type - but it took hundreds of years to standardize where we are today.

Johannes Gutenberg is credited with developing the printing press in the 1450s. Printing using wooden blocks had been brought to the Western world from China, which led him to replace the wood or clay characters with metal, thus giving us what we now think of as moveable type. This meant we were now arranging blocks of characters to print words onto paper. From there it was only a matter of time before we realized that pressing a key could stamp a character onto paper as we went, rather than composing a full page and then pressing ink to paper.

The first to get credit for pressing letters onto paper using a machine was Venetian Francesco Rampazzetto in 1575. But as with many innovations, this one needed to bounce around in the heads of inventors until the appropriate level of miniaturization and precision was ready. Henry Mill filed an English patent in 1714 for a machine that could type (or impress) letters progressively. By then, printed books were ubiquitous but we weren’t generating pages of printed text on the fly just yet. 

Others would develop similar devices, but from 1801 to 1810 Pellegrino Turri in Italy developed carbon paper. He coated one side of a sheet of paper with carbon and the other side with wax. Why did he invent that, other than to give us an excuse to say carbon copy later (and thus the cc in an email)?

Either he or Agostino Fantoni da Fivizzano invented a mechanical machine for pressing characters to paper for Countess Carolina Fantoni da Fivizzano, a blind friend of his. She would go on to send him letters written on the device, some of which exist to this day. More inventors tinkered with the idea of mechanical writing devices, often working in isolation from one another.

One was a surveyor, William Austin Burt. He found the handwritten documents of his field work laborious, and so gave us the typographer in 1829. Each letter had to be manually moved into place to print, so it wasn’t all that much faster than a handwritten document, but the name would be hyphenated later to form type-writer. And with precision increasing and a lot of invention going on at the time, there were other devices. His patent, by the way, was signed by Andrew Jackson.

John Pratt introduced his Pterotype in an article in Scientific American in 1867. It was a device that more closely resembled the keyboard layout we know today, with four rows of keys and a split in the middle for the hands. Others saw the article and continued with their own innovative additions.

Frank Hall had worked on the telegraph before the Civil War and used his knowledge there to develop a Braille writer, which functioned similarly to a keyboard. He would move to Wisconsin, where he came in contact with another team developing a keyboard.

Christopher Latham Sholes saw the article in Scientific American and, along with Carlos Glidden and Samuel Soule out of Milwaukee, developed from 1867 to 1868 the QWERTY layout we know as the standard keyboard layout today. Around the same time, Danish pastor Rasmus Malling-Hansen introduced the writing ball in 1870. It could also type letters onto paper, but with a much more complicated keyboard layout. It was actually the first typewriter to go into mass production - but by that point new inventions were starting to follow the QWERTY layout. Because asdfjkl;. Both, though, were looking to increase typing speed, with Malling-Hansen’s layout putting consonants on the right side and vowels on the left - while Sholes and Glidden mixed keys up to help reduce the strain on hardware as it recoiled, splitting characters commonly found together in words between the two sides.
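As a toy illustration (my own, not from the episode), you can check how often common English digraphs end up with their two letters on opposite halves of the QWERTY letter rows - a very rough proxy for the kind of separation Sholes was after:

```python
# Toy check (illustrative only): how many common English digraphs have
# their letters on opposite halves of the QWERTY letter rows? Sholes'
# real goal was separating type bars, not hands, so treat this as a
# rough proxy rather than the historical design rule.
LEFT = set("qwertasdfgzxcvb")   # left-half letters, hypothetical split
RIGHT = set("yuiophjklnm")      # right-half letters

COMMON_DIGRAPHS = ["th", "he", "in", "er", "an", "re", "on", "at", "en", "nd"]

def split_across_halves(digraph: str) -> bool:
    a, b = digraph
    return (a in LEFT) != (b in LEFT)

split = [d for d in COMMON_DIGRAPHS if split_across_halves(d)]
print(split)
```

Half of these very frequent pairs straddle the two sides, which hints at why adjacent type bars rarely fired back to back.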

James Densmore encountered the Sholes work and jumped in to help. They had the design relentlessly tested and iterated on it, getting more and more productivity gains and making the device hardier. When the others left the project, it was Densmore and Sholes carrying on. But Sholes was also a politician and the editor of a newspaper, so he had a lot going on. He sold his share of the patent for the layout for $12,000, while Densmore decided to go with royalties instead.

By the 1880s, the invention had been floating around long enough and, given a standardized keyboard, was finally ready to be mass produced. This began with the Sholes & Glidden Type Writer, introduced in America in 1874. That was followed by the Caligraph. But it was Remington that would take the Sholes patent and create the Remington Typewriter, removing the hyphen from the word typewriter and going mainstream - netting Densmore a million and a half bucks in 1800s money for his royalties. And if you’ve seen anything typed on one, you’ll note that it supported one font: the monospaced sans serif Grotesque style.

Characters had always been upper case. Remington added a shift key to give us the ability to do both upper and lower case in 1878 with the Remington Model 2. This was also where we got the ampersand, parentheses, percent symbol, and question mark as shift characters for numbers. Remington also added tab and margins in 1897. Mark Twain was the first author to turn in a manuscript from a typewriter, using what else but a Remington Typewriter. By then, we were experimenting with the sizes and spaces between characters, or kerning, to make typed content easier to read. Some companies moved to slab serif or Pica fonts and typefaces. You could really tell a lot about a company by that Olivetti with its modern, almost anti-Latin fonts.

The Remington Typewriter Company would later merge with the Rand Kardex company to form Remington Rand, making typewriters and guns, and then in 1950 acquiring the Eckert-Mauchly Computer Corporation, who made ENIAC - arguably the first all-digital computer. Rand also acquired Engineering Research Associates (or ERA) and introduced the UNIVAC. Electronics maker Sperry acquired them in 1955, and then merged with Burroughs to form Unisys in 1986, still a thriving company. But what’s important is that they knew typewriters. And keyboards.

But electronics had been improving in the same era that Remington took their typewriters mainstream, and before. Samuel Morse developed the recording telegraph in 1835 and David Hughes added the printing telegraph. Emile Baudot gave us a 5-bit code in the 1870s that enhanced that, but those systems still used keys similar to what you’d find on a piano. The typewriter hadn’t merged with digital communications just yet. Thomas Edison patented an electric typewriter in 1872 but didn’t produce a working model. And this was a great time of innovation. For example, Alexander Graham Bell was hard at work on patenting the telephone at the time.

James Smathers gave us the first electric typewriter in 1920, and by the 1930s the improved Baudot code (where we get the term baud) was combined with a QWERTY keyboard by Siemens and others to give us typing over the wire. The Teletype Corporation was founded in 1906 and would go from tape punches and readers to producing the teletypes that let users dial into mainframes on the timesharing networks of the 1970s. But we’re getting ahead of ourselves. How did we eventually end up plugging a keyboard into a computer?

Herman Hollerith, the mind behind the original IBM punch cards for tabulating machines before his company got merged with others to form IBM, brought us the keypunch, which was later used to input data into early computers. The BINAC computer used a similar representation: eight keys and an electromechanical control were used to input data into the computer much as a punch card might - for this, think of a modern 10-key pad. Given that we had electric typewriters for a couple of decades, it was only a matter of time before a full keyboard’s worth of text was needed on a computer. That came in 1954, with pioneering work done at MIT. Here, Douglas Ross wanted to hook up a Flexowriter electric typewriter to a computer, which would be done the next year as yet another of the huge innovations coming out of the Whirlwind project at MIT. With the addition of core memory to computing, that was the first time a real keyboard - and being able to write characters into a computer - was really useful. Nearly 400 years after the first attempts to build a moveable type machine, and just shy of 100 years after the layout had been codified, the computer keyboard was born.

The PLATO team at the University of Illinois Urbana-Champaign in the late 1960s was one of many research teams that sought to develop cheaper input/output mechanisms for their ILLIAC computer, and prior to moving to standard keyboards they built custom devices with fewer keys to help students select multiple-choice answers. But eventually they too used teletype-esque systems.

Those early keyboards were mechanical. They made a heavy, clanky sound when the keys were pressed - not as much as a big mechanical typewriter, but not like the keyboards we use today. These used keys with springs inside them. Springs would be replaced with pressure pads in some machines, including the Sinclair ZX80 and ZX81, and the Timex Sinclair 1000. Given that there were fewer moving parts, they were cheap to make. These membrane keyboards used conductive traces with a gate between two membranes: when a key was pressed, electricity flowed, and when the key was released, it stopped. I never liked them because they just didn’t have that feel. In fact, they’re still used in devices like microwaves to provide buttons under LED lights that you can press.
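To make the circuit-closing idea concrete, here’s a minimal sketch (my own, with a hypothetical 2x3 layout - not any real controller’s firmware) of how a keyboard controller scans a key matrix for closed contacts:

```python
# Toy sketch of key-matrix scanning: drive one row at a time, read the
# column lines, and report any key whose contact closes the circuit.
# The layout below is hypothetical and tiny for illustration.
MATRIX = [
    ["q", "w", "e"],
    ["a", "s", "d"],
]

def scan(closed_contacts):
    """Return the keys detected, given a set of (row, col) closed contacts."""
    detected = []
    for r, row in enumerate(MATRIX):        # energize each row in turn
        for c, key in enumerate(row):       # sample each column line
            if (r, c) in closed_contacts:   # circuit closed -> key is down
                detected.append(key)
    return detected

print(scan({(0, 1), (1, 2)}))  # contacts under 'w' and 'd'
```

A real controller repeats this scan thousands of times a second and debounces the readings, but the core loop is just this row-by-row sweep.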

By the late 1970s, keyboards were becoming more and more common. The next advancement was the chiclet keyboard, common on the TRS-80 and the IBM PCjr. These were like membrane keyboards but used molded rubber for the keys. Scissor-switch keyboards later became the standard for laptops - these involve a couple of pieces of plastic under each key, arranged like a scissor. And more and more keyboards were produced.

With an explosion in the amount of time we spent on computers, we eventually got about as many designs of ergonomic keyboards as you can think of. Here, doctors or engineers or just random people would attempt to raise or lower the hands, move the hands apart, or depress or raise various keys. But as we moved from desktops to laptops, or to typing directly on screens as we do with tablets and phones, those sell less and less.

I wonder what Sholes would say if you showed him and the inventors he worked with what the QWERTY keyboard looks like on an iPhone today? I wonder how many people know that at least two of the steps in the story of the keyboard had to do with helping the blind communicate through the written word? I wonder how many know about the work Alexander Graham Bell did with the deaf, the impact that had on his understanding of the vibrations of sound, and the emergence of the phonautograph to record sound - and how that would become acoustic telegraphy and then the telephone, which could later carry those baud streams? Well, we’re out of time for today, so that story will have to get tabled for a future episode.

In the meantime, look around for places where there’s no standard. Just like the keyboard layout took different inventors and iterations to find the right amount of productivity, any place where there’s not yet a standard just needs that same level of deep thinking - and sometimes generations - to get perfected. We still use the QWERTY layout today; sometimes once we find the right mix, we’ve set in motion an innovation that can become a true game changer. And if it’s not ready, at least we’ve contributed to the evolutions that revolutionize the world. Even if we don’t use those inventions. Bell famously never had a phone installed in his office. Because distractions. Luckily I disabled notifications on my phone before recording this or it would never get out…

Apple and NeXT Computer


Steve Jobs had an infamous split with the board of directors of Apple and left the company shortly after the release of the original Mac. He was an innovator who had started Apple in the garage with Steve Wozniak at 21 years old, and who at 30, while already plenty wealthy, felt he still had more to give and do. We can say a lot of things about him, but he was arguably one of the best product managers ever.

He told Apple he’d be taking some “low-level staffers” and ended up taking Rich Page, Bud Tribble, Dan'l Lewin, George Crow, and Susan Barnes, who would be the CFO. They also took Susan Kare and Joanna Hoffman. The team had their eyes on a computer that specifically targeted higher education. They wanted to build computers for researchers and universities.

Companies like CDC and Data General had done well in universities, and the team knew there was a niche that could be carved out there. There were some gaps with the Mac that made it a hard sell in research environments: computer scientists needed object-oriented programming and protected memory. Having seen the work at PARC on object-oriented languages, Jobs knew the power of the approach and how future-proof it was.

Unix System V had branched a number of times and it was a bit more of a red ocean than I think they realized. But Jobs put up $7 million of his own money to found NeXT Computer. He’d add another $5 million and Ross Perot would add another $20 million. The pay bands were among the most straightforward of any startup ever founded: senior staff made $75,000 and everyone else got $50,000. Simple.

Ironically, so soon after the 1984 Super Bowl ad where Jobs bashed IBM, they hired the man who designed the IBM logo, Paul Rand, to design a logo for NeXT. They paid him $100,000 flat. Imagine the phone call when Jobs called IBM to get them to release Rand from a conflict of interest so he could work with them.

They released the first computer in 1988. The NeXT Computer, as it was called, was expensive for the day, coming in at $6,500. It sported a Motorola 68030 CPU and clocked in at a whopping 25 MHz. And it came with a special operating system called NeXTSTEP.

NeXTSTEP was based on the Mach kernel, with some of the source code coming from BSD. If we go back a little, Unix was started at Bell Labs in 1969 and by the late 70s had forked into BSD, Unix Version 7, and PWB - with each of those lines resulting in other forks and descendants that would eventually become OpenBSD, SunOS, NetBSD, Solaris, HP-UX, AIX, and countless others, and inspire Linux.

Mach was developed at Carnegie Mellon University and is one of the earliest microkernels. With Mach, Richard Rashid (who would later found Microsoft Research) and Avie Tevanian were looking specifically at distributed computing. The Mach project was kicked off in 1985, the same year Jobs left Apple.

Mach was backwards-compatible with BSD 4.2 and so could run a pretty wide variety of software. It allowed for threads, or units of execution, and tasks, the containers of resources that host those threads. It provided support for messages - typed data objects that fall outside the scope of tasks and threads - and a protected message queue to manage the messages between tasks, along with rights of access. They stood it up on a DEC VAX and released it publicly in 1987.
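As a rough sketch of the idea (names and structure are my own, not Mach's actual API), you can model typed messages flowing between tasks through a queue that enforces a receive right:

```python
# Toy model of Mach-style IPC: tasks exchange typed messages through a
# queue, and only the task holding the "receive right" may dequeue.
# Class and field names are illustrative, not Mach's real interfaces.
from collections import deque
from dataclasses import dataclass

@dataclass
class Message:
    msg_type: str  # Mach messages were typed data objects
    body: object

class Port:
    def __init__(self, receiver):
        self.queue = deque()
        self.receiver = receiver          # the task holding the receive right

    def send(self, message):
        self.queue.append(message)        # any sender may enqueue

    def receive(self, task):
        if task != self.receiver:         # enforce rights of access
            raise PermissionError("task lacks the receive right")
        return self.queue.popleft()

port = Port(receiver="task_a")
port.send(Message("ping", b"hello"))
print(port.receive("task_a").msg_type)
```

The real kernel does far more (send rights, out-of-line memory, kernel-mediated copying), but the shape - typed messages, a protected queue, and access rights - is the part that carried forward into NeXTSTEP.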

Here’s the thing: Unix licensing from Bell Labs was causing problems, so it was important to everyone that the license be open. And this would be important to NeXT as well. NeXT needed a next-generation operating system, and so Avie Tevanian was recruited to join NeXT as the Vice President of Software Engineering. There, he designed NeXTSTEP with a handful of engineers.

The computers had custom boards and were fast. And they were a sleek black like nothing I’d seen before. But Bill Gates was not impressed claiming that “If you want black, I’ll get you a can of paint.” But some people loved the machines and especially some of the tools NeXT developed for programmers.

They got a factory to produce the machines, but it only needed to crank out 100 a month as opposed to the thousands it was built to produce. In other words, the price tag was keeping universities from buying the machines. So they pivoted a little. They went up-market with the NeXTcube in 1990, which ran NeXTSTEP, OPENSTEP, or NetBSD and came with the Motorola 68040 CPU. This machine came in at $8,000 to almost $16,000. It came with a hard drive. For the lower end of the market they also released the NeXTstation in 1990, which shipped for just shy of $5,000.

The new models helped but by 1991 they had to lay off 5 percent of the company and another 280 by 1993. That’s when the hardware side got sold to Canon so NeXT could focus exclusively on NeXTSTEP.  That is, until they got acquired by Apple in 1997.

By the end, they’d sold around 50,000 computers. Apple bought NeXT for $429 million and 1.5 million shares of Apple stock, which was trading at $17 a share at the time (about 22 cents adjusted for later splits), so worth another $25 and a half million dollars. That makes the deal worth about $454 million, or $9,080 per machine NeXT ever built. But it wasn’t about the computer business, which had already been spun down. It was about Jobs and getting a multi-tasking, object-oriented powerhouse of an operating system, the grandparent of OS X - and the derivative macOS, iOS, iPadOS, watchOS, and tvOS forks.

The work done at NeXT has had a long-term impact on the computer industry as a whole. For one, the spinning pinwheel on a Mac. And the Dock. And the App Store. And Objective-C. But also Interface Builder as an IDE was revolutionary. Today we use Xcode. But many of the components go back all the way. And so much more. 

After the acquisition, NeXTSTEP became Mac OS X Server in 1999 and by 2001 was Mac OS X. The rest is history. But the legacy of the platform is considerable. Just on NeXTSTEP we had a few pretty massive successes.

Tim Berners-Lee developed the first web browser, WorldWideWeb, on NeXTSTEP on a NeXT computer. Other browsers for other platforms would come, but his work became the web as we know it today. The machine he developed the web on is now on display at the National Museum of Science and Media in the UK.

We also got games like Quake, Heretic, Strife, and Doom, built with the help of Interface Builder. And WebObjects. And the people. 

Tevanian came with NeXT to Apple as the Senior Vice President of Software Engineering. Jobs became an advisor, then CEO. Craig Federighi came with the acquisition as well - now Apple’s Senior Vice President of Software Engineering. And I know dozens of others who came in from NeXT and helped reshape the culture at Apple. It took three years to ship that first computer at NeXT. It took 2 1/2 years to develop the iPhone. The Apple II, iPod, iPad, and first iMac took much less. It was nearly 5 years for the original Mac. Some things take a little more time to flesh out than others. Some need the price of components, or new components, to show up before you know a product can be insanely great. Some need false starts. Steve Jobs famously said Apple wanted to create a computer in a book in 1983. That finally came out with the release of the iPad in 2010, 27 years later. 

And so the final component of the Apple acquisition of NeXT to mention is Steve Jobs himself. He didn’t initially come in. He’d just become a billionaire off Pixar and was doing pretty darn well. His arrival back at Apple signified the end of a long drought for the company, and all those products we mentioned, plus the iTunes music store and the App Store (both initially built on WebObjects), would change the way we consume content forever. His impact was substantial. For one, after factoring in stock splits, the company might still be trading at 22 cents a share without him. Instead they’re the most highly valued company in the world. But that pales in comparison to the way he and his teams and that relentless eye for product and design actually changed the world. And the way his perspectives on privacy help protect us today, long after he passed. 

The hero’s journey (as described by Joseph Campbell) is a storytelling template that follows a hero who falls from grace, learns from the mistakes of their past, reinvents themselves amidst a crisis over the course of a grand adventure, and returns home transformed. NeXT and Pixar represent part of that journey here. Which makes me wonder: what is my own monomyth? Where will I return to? What is or was my abyss? These can be large or small. And while very few people in the world will have one like Steve Jobs did, we should all reflect on ours and learn from them. And yes, that was plural, because life is not so simple that there is only one.

The past, and our understanding of it, predicts the future. Good luck on your journey. 

Apple's Lost Decade


I often think of companies in relation to their contribution to the next evolution in the forking and merging of disciplines in computing that brought us to where we are today. Many companies have multiple contributions. Few have as many such contributions as Apple. But there was a time when they didn’t seem so innovative. 

This lost decade began about half way through the tenure of John Sculley and can be seen through the lens of the CEOs. There was Sculley, CEO from 1983 to 1993. Co-founders and spiritual centers of Apple, Steve Jobs and Steve Wozniak, left Apple in 1985 - Jobs to create NeXT and Wozniak to jump into a variety of ventures like making universal remotes, wireless GPS trackers, and other adventures. 

This meant Sculley was finally in a position to be fully in charge of Apple. His era would see sales 10x from $800 million to $8 billion. Operationally, he was one of the more adept at cash management, putting $2 billion in the bank by 1993. Suddenly the vision of Steve Jobs was paying off. That original Mac started to sell and grow markets. But during this time, first the IBM PC and then the clones, all powered by the Microsoft operating system, completely took the operating system market for personal computers. Apple had high margins yet struggled for relevance. 

Under Sculley, Apple released HyperCard, funded a skunkworks team in General Magic - arguably the beginning of ubiquitous computing - and, using many of those same ideas, backed the Newton, coining the term personal digital assistant. Under his leadership, Apple marketing sent 200,000 people home with a Mac to try it out. Putting the device in the hands of the people is probably one of the more important lessons they still teach newcomers who work in Apple Stores. 

Looking at the big financial picture, it seems like Sculley did alright. But in Apple’s fourth-quarter earnings call in 1993, they announced a 97 percent drop from the same quarter in 1992. This was also when a serious technical debt problem began to manifest itself. 

The Mac operating system grew from the system those early pioneers built in 1984, with Macintosh System Software going from version 1 to version 7. But after annual releases leading up to version 6, it took 3 years to develop System 7, and the question of what direction to take the operating system caused a schism in Apple engineering around what would happen once 7 shipped. It seems like most companies go through almost the exact same schism. Microsoft quietly grew NT to resolve their issues with Windows 3 and 95 until it finally became the thing in 2000. IBM had invested heavily into basically that same code with Warp - but wanted something new. 

Something happened while Apple was building System 7. They lost Jean-Louis Gassée, who had been head of development since Steve Jobs left. When Sculley gave everyone a copy of his memoir, Gassée provided a copy of The Mythical Man-Month, from Fred Brooks’ experience with the IBM System/360. It’s unclear today if anyone read it. To me this is really the first big sign of trouble. Gassée left to build another OS, BeOS. 

By the time System 7 was released, it was clear that the operating system was bloated and needed a massive object-oriented overhaul. Under Sculley the teams were split, with one team eventually getting spun off into its own company and then becoming a part of IBM to help with their OS woes. The team at Apple took 6 years to release the next operating system. Meanwhile, one of Sculley’s most defining decisions was to avoid licensing the Macintosh operating system - probably because it was just too big a mess to do so. And yet everyday users didn’t notice all that much and most loved it. 

But third party developers left. And that was at one of the most critical times in the history of personal computers, because Microsoft was gaining a lot of developers for Windows 3.1 and then released the wildly popular Windows 95. 

The Mac accounted for most of the revenue of the company, but under Sculley the company dumped a lot of R&D money into the Newton. As with other big projects, the device took too long to ship and when it did, the early PDA market was a red ocean with inexpensive competitors. The Palm Pilot effectively ended up owning that pen computing market. 

Sculley was a solid executive. And he played the part of visionary from time to time. But under his tenure Apple faced operating system problems, rumors about Windows 95, and developers leaving Apple behind for the Windows ecosystem - and whether those technical issues are on his lieutenants or him, the buck stops there. The Windows clone industry led to PC price wars that caused Apple revenues to plummet. And so Markkula was off to find a new CEO. 

Michael Spindler became the CEO from 1993 to 1996. The failure of the Newton and Copland operating systems are placed at his feet, even though they began in the previous regime. Markkula had hired Digital Equipment and Intel veteran Spindler to assist in European operations, and he rose to President of Apple Europe and then ran all international operations. He would become the only CEO to have no new Mac operating systems released in his tenure. Missed deadlines abounded with Copland and then Tempo, which would become Mac OS 8. 

And those aren’t the only products that came out at the time. We also got the PowerCD, the Apple QuickTake digital camera, and the Apple Pippin. Bandai had begun trying to develop a video game system built on a scaled-down version of the Mac. The Apple Pippin realized Markkula’s original idea of the Mac as an Apple video game system, from when the Mac was first conceived. 

There were a few important things that happened under Spindler though. First, Apple moved to the PowerPC architecture. Second, he decided to license the Macintosh operating system to companies wanting to clone the Macintosh. And he had discussions with IBM, Sun, and Philips to acquire Apple. Dwindling reserves, increasing debt. Something had to change and within three years, Spindler was gone.

Gil Amelio was CEO from 1996 to 1997. He moved from the board, while the CEO at National Semiconductor, to CEO of Apple. He inherited a company short on cash and high on expenses. He quickly began pushing forward OS 8, cutting a third of the staff, streamlining operations, dumping some poor quality products, and releasing new products Apple needed to be competitive, like the Apple Network Server. 

He also tried to acquire BeOS for $200 million, which would have brought Gassée back, but instead acquired NeXT for $429 million. But despite the good trajectory he had the company on, the stock was still dropping, Apple continued to lose money, and an immovable force was back - now with another decade of experience launching two successful companies: NeXT and Pixar. 

The end of the lost decade can be seen as the return of Steve Jobs. Apple didn’t have an operating system. They were in a lurch, so to speak. I’ve seen or read it portrayed that Steve Jobs intended to take control of Apple. And I’ve seen it portrayed that he was happy digging up carrots in the back yard but came back because he was inspired by Jony Ive. But I remember the feel around Apple changed when he showed back up on campus. As with other companies that dug themselves out of a lost decade, there was a renewed purpose. There was inspiration. 

By 1997, one of the heroes of the personal computing revolution, Steve Jobs, was back. But not quite… He became interim CEO in 1997 and immediately turned his eye to making Apple profitable again. Over the past decade, the product line expanded to include a dozen models of the Mac. Anyone who’s read Geoffrey Moore’s Crossing the Chasm, Inside the Tornado, and Zone To Win knows this story all too well. We grow, we release new products, and then we eventually need to take a look at the portfolio and make some hard cuts. 

Apple released the Macintosh II in 1987, then the Macintosh Portable in 1989, then the IIcx and IIci in 1989 along with the Apple IIgs, the last of that series. Facing competition in different markets, we saw the LC line come along in 1990 and the Quadra in 1991, the same year three models of the PowerBook were released. Different printers, scanners, and CD-ROMs had come along by then, and in 1993 we got a Macintosh TV, the Apple Newton, and more models of the LC. By 1994 there were even more of those, plus the QuickTake, the Workgroup Server, and the Pippin. By 1995 there were a dozen Performas, half a dozen Power Macintosh 6400s, the Apple Network Server, and yet another version of the Performa 6200, and we added the eMate and beige G3 in 1997. The SKU list was a mess. Cleaning that up took time but helped prepare Apple for a simpler sales process. Today we have a good, better, best with each device, with many a computer being built-to-order. 

Jobs restructured the board, ending the long tenure of Mike Markkula, who’d been so impactful at each stage of the company so far. One of the forces behind the rise of the Apple computer and the Macintosh was about to change the world again, this time as the CEO. 

The Unlikely Rise Of The Macintosh


There was a nexus of Digital Research and Xerox PARC, along with Stanford and Berkeley in the Bay Area. The rise of the hobbyists and the success of Apple attracted some of the best minds in computing to Apple. This confluence was about to change the world. One of those brilliant minds that landed at Apple started out as a technical writer. 

Apple hired Jef Raskin as their 31st employee, to write the Apple II manual. He quickly started harping on people to build a computer that was easy to use. Mike Markkula wanted to release a gaming console or a cheap computer that could compete with the Commodore and Atari machines at the time. He called the project “Annie.”

The project began with Raskin, but he had a very different idea than Markkula’s. He summed it up in an article called “Computers by the Millions” that wouldn’t see publication until 1982. His vision was closer to his PhD dissertation: bringing computing to the masses. For this, he envisioned a menu-driven operating system that was easy to use and inexpensive. It was not yet a GUI in the sense of a windowing operating system, and so it could run on chips that were rapidly dropping in price. He planned to use the 6809 chip for the machine and give it a five inch display. 

He didn’t tell anyone that he had a PhD when he was hired, as the team at Apple was skeptical of academia. Jobs provided input, but was off working on the Lisa project, which used the 68000 chip. So they had free rein over what they were doing. 

Raskin quickly added Joanna Hoffman for marketing. She was on leave from getting a PhD in archaeology at the University of Chicago and was the marketing team for the Mac for over a year. They also added Burrell Smith, employee #282 from the hardware technician team, to do hardware. He’d run with the Homebrew Computer Club crowd since 1975 and had just strolled into Apple one day and asked for a job. 

Raskin also brought in one of his students from the University of California San Diego, who was taking a break from working on his PhD in neurochemistry. Bill Atkinson became employee 51 at Apple and joined the project. They pulled in Andy Hertzfeld, who Steve Jobs had hired when Apple bought one of his programs as he was wrapping up his degree at Berkeley, and who’d been sitting on the Apple services team doing Apple III demos.

They added Larry Kenyon, who’d worked at Amdahl and then on the Apple III team. Susan Kare came in to add art and design. They, along with Chris Espinosa - who’d been in the garage with Jobs and Wozniak working on the Apple I, ended up comprising the core team.

Over time, the team grew. Bud Tribble joined as the manager for software development. Jerrold Manock, who’d designed the case of the Apple II, came in to design the now-iconic Macintosh case. The team would eventually expand to include Bob Belleville, Steve Capps, George Crow, Donn Denman, Bruce Horn, and Caroline Rose as well. It was still a small team. And they needed a better code name. But chronologically let’s step back to the early project. 

Raskin chose his favorite apple, the McIntosh, as the codename for the project, respelled Macintosh. As far as codenames go it was a pretty good one. So their mission would be to ship a machine that was easy to use, would appeal to the masses, and would be at a price point the masses could afford. They were looking at 64k of memory, a Motorola 6809 chip, and a 256-by-256 bitmap display. Small, light, and inexpensive.

Jobs’ relationship with the Lisa team was strained, so he was taken off of that project and started moving in on the Macintosh team. It was quickly the Steve Jobs show. 

Having seen what could be done with the Motorola 68000 chip on the Lisa team, Jobs had them redesign the board to work with that. After visiting Xerox PARC at Raskin’s insistence, Jobs finally got the desktop metaphor and true graphical interface design. 

Xerox had not been quiet about the work at PARC. Going back to 1972 there were even television commercials. And Raskin had done time at PARC while on sabbatical from Stanford. Information about Smalltalk had been published, and people like Bill Atkinson were reading about it in college. People had been exposed to the mouse all around the Bay Area in the 60s and 70s, or had read Engelbart’s scholarly works on it. Many of the people who worked on these projects had doctorates and were academics. They shared their research as freely as love was shared during that counter-culture time, just as knowledge had passed from MIT to Dartmouth and then spread around the country in the back of Bob Albrecht’s VW in the 60s. That spirit of innovation and the constant evolutions over the past 25 years found their way to Steve Jobs. 

He saw the desktop metaphor and mouse and fell in love with it, knowing they could build one for less than the $400 unit Xerox had. He saw how an object-oriented programming language like Smalltalk made all that possible. The team was already on their way to the same types of things and so Jobs told the people at PARC about the Lisa project, but not yet about the Mac. In fact, he was as transparent as anyone could be. He made sure they knew how much he loved their work and disclosed more than I think the team planned on him disclosing about Apple. 

This is the point where Larry Tesler and others realized that the group of rag-tag garage-building Homebrew hackers had actually built a company that had real computer scientists and was on track to changing the world. Tesler and some others would end up at Apple later - to see some of their innovations go to a mass market. Steve Jobs at this point totally bought into Raskin’s vision. Yet he still felt they needed to make compromises with the price and better hardware to make it all happen. 

Raskin couldn’t make the kinds of compromises Jobs wanted. He also had an immunity to the now-infamous Steve Jobs reality distortion field, and they clashed constantly. So eventually Raskin left the project, just when it was starting to take off. He would go on to work with Canon to build his vision, which became the Canon Cat. 

With Raskin gone, and armed with a dream team of mad scientists, they got to work, tirelessly pushing towards shipping a computer they all believed would change the world. Jobs brought in Bill Fernandez to help with projects like the Mac OS and later HyperCard. Wozniak had a pretty big influence over Raskin in the early days of the Mac project and helped here and there with the project, like with the bit-serial peripheral bus on the Mac. 

Steve Jobs wanted an inexpensive mouse that could be manufactured en masse. Jim Yurchenco from Hovey-Kelley, later called IDEO, got the task - given that the trusted engineers at Apple had full dance cards. He looked at the Xerox mouse and other devices around, including trackballs in Atari arcade machines. Those used optics instead of mechanical switches: as the ball under the mouse rolled, beams of light would be interrupted, and the cost of those components had come down faster than the technology in the Xerox mouse. He used a ball from a roll-on deodorant stick and got to work. The rest of the team designed the injection-molded case for the mouse. That work began with the Lisa, and by the time they were done, the price was low enough that every Mac could get one. 

Armed with a mouse, they figured out how to move windows over the top of one another. Susan Kare designed iconography that today reads a bit less 8-bit but is often every bit as true to form. Thinking through how they wanted to access various components of the desktop, or find things, they developed the Finder. Atkinson gave us marching ants, the concept of double-clicking, the lasso for selecting content, the menu bar, MacPaint, and later, HyperCard. 

It was a small team, working long hours, driven by Jobs’ push for perfection. Jobs made the Lisa team the enemy. Everything that was not the Mac just sucked. He took the team to art exhibits. He had the team sign the inside of the case to infuse them with the pride of an artist. He killed the idea of long product specifications before writing code, and they just jumped in, building and refining and rebuilding and rapid prototyping. The team responded well to the enthusiasm and the need for perfectionism. 

The Mac team was like a rebel squadron. They were like a start-up, operating inside Apple. They were pirates. They got fast and sometimes harsh feedback. And nearly all of them still look back on that time as the best thing they’ve done in their careers. 

As IBM and many others had learned the hard way before them, a small, inspired team can get a lot done. With such a small team and the ability to parlay work done for the Lisa, the R&D costs were minuscule until they were ready to release the computer. And yet, one can’t change the world overnight. 1981 turned into 1982 turned into 1983. 

More and more people came in to fill gaps. Colette Askeland came in to design the printed circuit board. Mike Boich went to companies to get them to write software for the Macintosh. Berry Cash helped prepare sellers to move the product. Matt Carter got the factory ready to mass produce the machine. Donn Denman wrote MacBASIC (because every machine needed a BASIC back then). Martin Haeberli helped write MacTerminal and Memory Manager. Bill Bull got rid of the fan. Patti King helped manage the software library. Dan Kottke helped troubleshoot issues with motherboards. Brian Robertson helped with purchasing. Ed Riddle designed the keyboard. Linda Wilkin took on documentation for the engineering team. It was a growing team. Pamela Wyman and Angeline Lo came in as programmers. Hap Horn and Steve Balog came in as engineers. 

Jobs had agreed to bring in adults to run the company. So they recruited 44-year-old hotshot CEO John Sculley to change the world as their CEO rather than selling sugar water at Pepsi. Sculley and Jobs had a tumultuous relationship over time. While Jobs had made tradeoffs on cost versus performance for the Mac, Sculley ended up raising the price for business reasons.

Regis McKenna came in to help with the marketing campaign. He would win over so much trust that he would later get called out of retirement to do damage control when Apple had an antenna problem on the iPhone. We’ll cover Antennagate at some point. They spearheaded the production of the now-iconic 1984 Super Bowl XVIII ad, which shows a woman running from conformity and depicted IBM as the Big Brother from George Orwell’s book, 1984. 

Two days after the ad, the Macintosh 128k shipped for $2,495. The price had jumped because Sculley wanted enough money to fund a marketing campaign. It shipped late, and the 128k of memory was a bit underpowered, but it was a success. Many of the concepts, such as a System and Finder, persist to this day. It came with MacWrite and MacPaint, and some of the other Lisa products were soon to follow, now as MacProject and MacTerminal. But the first killer app for the Mac was Microsoft Word, an early version of which had shipped for DOS before making its way to the Mac. 

Every machine came with a mouse. The machines came with a cassette that featured a guided tour of the new computer. You could write programs in MacBASIC and my second language, MacPascal. 

They hit the initial sales numbers despite the higher price. But over time that price bit them with sluggish sales. Despite the early success, sales were declining. Yet the team forged on. They introduced the Apple LaserWriter at a whopping $7,000. This was a laser printer based on the Canon 300 dpi engine. Burrell Smith designed a board, and newcomer Adobe knew laser printers, given that the founders were Xerox alumni. They added PostScript, which had initially been thought up while John Warnock was working at Evans & Sutherland and then implemented at PARC, to make for perfect printing at the time.

The sluggish sales caused internal issues. There’s a hangover when we do something great. First there were the famous episodes between Jobs, Sculley, and the board of directors at Apple. Sculley seems to have been portrayed by many as either a villain or a court jester of sorts in the story of Steve Jobs. Across my research, which began with books and notes and expanded to include a number of interviews, I’ve found Sculley to have been admirable in the face of what many might consider a petulant child. But they all knew a brilliant one. 

But amidst Apple’s first quarterly loss, Scully and Jobs had a falling out. Jobs tried to lead an insurrection and ultimately resigned. Wozniak had left Apple already, pointing out that the Apple II was still 70% of the revenues of the company. But the Mac was clearly the future. 

They had reached a turning point in the history of computers. The first mass marketed computer featuring a GUI and a mouse came and went. And so many others were in development that a red ocean was forming. Microsoft released Windows 1.0 in 1985. Acorn, Amiga, IBM, and others were in rapid development as well. 

I can still remember the first time I sat down at a Mac. I’d used the Apple IIs in school and we got a lab of Macs. It was amazing. I could open a file, change the font size and print a big poster. I could type up my dad’s lyrics and print them. I could play SimCity. It was a work of art. And so it was signed by the artists that brought it to us:

Peggy Alexio, Colette Askeland, Bill Atkinson, Steve Balog, Bob Belleville, Mike Boich, Bill Bull, Matt Carter, Berry Cash, Debi Coleman, George Crow, Donn Denman, Christopher Espinosa, Bill Fernandez, Martin Haeberli, Andy Hertzfeld, Joanna Hoffman, Rod Holt, Bruce Horn, Hap Horn, Brian Howard, Steve Jobs, Larry Kenyon, Patti King, Daniel Kottke, Angeline Lo, Ivan Mach, Jerrold Manock, Mary Ellen McCammon, Vicki Milledge, Mike Murray, Ron Nicholson Jr., Terry Oyama, Benjamin Pang, Jef Raskin, Ed Riddle, Brian Robertson, Dave Roots, Patricia Sharp, Burrell Smith, Bryan Stearns, Lynn Takahashi, Guy "Bud" Tribble, Randy Wigginton, Linda Wilkin, Steve Wozniak, Pamela Wyman and Laszlo Zidek.

Steve Jobs left to found NeXT. Some, like George Crow, Joanna Hoffman, and Susan Kare, went with him. Bud Tribble would become a co-founder of NeXT and then the Vice President of Software Technology after Apple purchased NeXT.

Bill Atkinson and Andy Hertzfeld would go on to co-found General Magic and usher in the era of mobility. One of the best teams ever assembled slowly dwindled away. And the oncoming dominance of Windows in the market took its toll.

It seems like every company has a “lost decade.” Some, like Digital Equipment, don’t recover from it. Others, like Microsoft and IBM (who has arguably had a few), emerge as different companies altogether. Apple seemed to go dormant after Steve Jobs left. They had changed the world with the Mac. They put swagger and an eye for design into computing. But in the next episode we’ll look at that long hangover, where they were left by the end of it, and how they emerged to change the world yet again. 

In the meantime, Walter Isaacson weaves together this story about as well as anyone in his book Steve Jobs. Steven Levy brilliantly tells it in his book Insanely Great. Andy Hertzfeld gives some of his stories at Folklore.org. And countless other books, documentaries, podcasts, blog posts, and articles cover various aspects as well. The reason it’s gotten so much attention is that where the Apple II was the watershed moment that introduced the personal computer to the mass market, the Macintosh was that moment for the graphical user interface.

On Chariots of the Gods?


Humanity is searching for meaning. We binge tv shows. We get lost in fiction. We make up amazing stories about super heroes. We hunt for something deeper than what’s on the surface. We seek conspiracies or... aliens.

I recently got around to reading a book that had been on my list for a long time. Not because I thought I would agree with its assertions - but because it came up from time to time in my research. 

Chariots of the Gods? is a book written in 1968 by German author Erich von Däniken. He goes through a few examples to, in his mind, prove that aliens not only had been to Earth but that they destroyed Sodom with fire and brimstone, which he said was a nuclear explosion. He also says the Ark of the Covenant was actually a really big walkie-talkie for calling space. 

Ultimately, the thesis centers around the idea that humans could not possibly have made the technological leaps we did, and so our technology must have been given to us by the gods. I find this to be a perfectly satisfactory science fiction plot. In fact, various alien conspiracy theories seemed to begin soon after Orson Welles’ 1938 live adaptation of H.G. Wells’ War of the Worlds and, like a virus, they mutated. But did this alien virus start in a bat in Wuhan or in Roman Syria? 

The ancient Greeks and then the Romans had a lot of gods. Lucian of Samosata, writing in modern-day Syria in the second century AD, when people believed in multiple pantheons of gods, thought they should have a couple more. He wove together a story, which he called “A True Story.” In it, he says it’s all make-believe. In the satire, Lucian and crew get taken to the Moon, where they get involved in a war between the Moon and Sun kings over the rights to colonize the Morning Star. They then get eaten by a whale, escape, and travel on, meeting great Greeks through time including Pythagoras, Homer, and Odysseus. And they find the new world. Think of how many modern plots are wrapped up in that book from the second century, written to make fun of storytellers like Homer.

The 1800s was one of the first centuries in which humanity saw a rapid merger and explosion of scientific understanding, and Edgar Allan Poe again took us to the moon in "The Unparalleled Adventure of One Hans Pfaall" in 1835. Then came Jules Verne, Mary Shelley, and H.G. Wells with that War of the Worlds in 1898. By then we’d mapped the surface of the moon with telescopes, so they wrote of Mars and further. H.P. Lovecraft gave us the Call of Cthulhu. These authors predicted the future - but science fiction became a genre that did more. It helped us create satire or allegory or just comparisons to these rapid global changes, in ways that called out the social impacts to consider before or after we invent. And it helped us cope with evolving social norms. The magazine Amazing Stories came in 1926, and the greatest work of science fiction premiered in 1942 with Isaac Asimov’s Foundation. Science fiction was opening our eyes to what was possible and opened the minds of scientists to study what we might create in the future. But it wasn’t real. 

Von Däniken and French author Robert Charroux seemed to influence one another, taking history and science and turning them into pseudohistory and pseudoscience. And both got many of their initial ideas from the 1960 book The Morning of the Magicians. But Chariots of the Gods? was a massive success and a best seller. And rather than being dismissed, it has now spread to include conspiracy and other theories. Which is fine as fiction, but not as non-fiction. 

Let’s look at some other specific examples from Chariots of the Gods? Von Daniken claims that Japanese Dogu figures were carvings of aliens. He claims there were alien helicopter carvings in an Egyptian temple. He claims the Nazca lines in Peru were a way to call aliens and that a map from 1513 actually showed the earth from space rather than thinking it possible that cartography was capable of showing a somewhat accurate representation of the world in the Age of Discovery. He claimed stories in the Bible were often inspired by alien visits much as some First Nation peoples and cargo cults thought people in ships visiting their lands for the first time might be gods. 

The one thing I’ve learned researching these episodes is that technology has been a constant evolution. Many of our initial discoveries like fire, agriculture, and using the six simple machines could be observed in nature. From the time we learned to make fire, it was only a matter of time before humanity discovered that stones placed in or around fire might melt in certain ways - and so metallurgy was born. We went through population booms as we discovered each of these.

We used the myths and legends that became religions to hand down knowledge, as I was taught to use mnemonics to memorize the seven layers of the OSI model. That helped us preserve knowledge of astronomy across generations so we could explore further and better maintain our crops. 

The ancient Sumerians and then Babylonians gave us writing. But we had been drawing on caves for thousands of years. Which seems more likely: that we were gifted this advance, or that as we began to settle into denser urban centers, the need to scale operations led us to track the number of widgets we had with markings that, over time, evolved into a written language? First through pictures, and then through words that evolved into sentences and then epics. We could pass down information more reliably across generations. 

Trade and commerce, and then ziggurats and pyramids, helped hone our understanding of mathematics. The study of logic and automata allowed us to build bigger and faster and to process more raw materials. Knowledge of all of these discoveries spread across trade routes. 

So ask yourself this. Which is more likely, the idea that humans maintained a constant, ever-evolving stream of learned ingenuity that was passed down for tens of thousands of years until it accelerated when we learned to write, or do you think aliens from outer space instead gave us technology? 

I find it revokes our very agency to assert anything but that humans are capable of the fantastic feats we have achieved, and I believe it insulting to take away from the great philosophers, discoverers, scientists, and thinkers who got us where we are today. 

Our species has long made up stories to explain that which the science of the day cannot. Before we understand the why, we make up stories about the how. This allowed us to pass knowledge down between generations. We see this in ancient explanations of the movements of stars before we had astrolabes. We see humans wanting to leave something behind that helps the next generations, as with burial sites like Stonehenge - not summoning Thor from an alien planet, as Marvel has rewritten its own epics to indicate, in part based on rethinking these mythos in the context of Chariots of the Gods?

Ultimately the greater our gaps in understanding, the more disconnected with ourselves I find that most people are. We listen to talking heads rather than think for ourselves. We get lost in theories of cabals. We seek a deeper, missing knowledge because we can’t understand everything in front of us. 

Today, if we know where to look, and can decipher the scientific jargon, all the known knowledge of science and history is at our fingertips. But it can take a lifetime to master one of thousands of fields of scientific research. If we don’t have that specialty then we can perceive it as unreachable and think maybe this pseudohistorical account of humanity is true - and maybe aliens really did give us our technology. 

If we feel left behind then it becomes easier to blame others when we can’t get below the surface of complicated concepts. Getting left behind might mean that jobs don’t pay what they paid our parents. We may perceive others as getting attention or resources we feel we deserve. We may feel isolated and alone. And all of those are valid feelings. When they’re heard then maybe we can look to the future instead of accepting pseudoscience and pseudohistory and conspiracies. Because while they make for fun romps on the big screen, they’re dangerous when taken as fact.

The Apple Lisa


Apple found massive success on the back of the Apple II. They went public like many of the late 70s computer companies, and the story could have ended there, as it did for many computer companies of the era that were potentially bigger, had better technology or go-to-market strategies, or were even far more innovative. 

But it didn’t. The journey to the next stage began with the Apple IIc, Apple IIgs, and other incrementally better, faster, or smaller models. Those funded the research and development of a number of projects. One was a new computer: the Lisa. I bet you thought we were jumping into the Mac next. Getting there. But twists and turns, as the title suggests. 

The success of the Apple II led to many of the best and brightest minds in computers wanting to go work at Apple. Jobs came to be considered a visionary. The pressure to actually become one has been the fall of many a leader. And Jobs almost succumbed to it as well. 

Some go down due to a lack of vision, others because they don’t have the capacity for executional excellence. Some lack lieutenants they can trust. The story isn’t clear with Jobs. He famously sought perfection. And sometimes he got close. 

The Xerox Palo Alto Research Center, or PARC for short, had been a focal point of raw research and development since 1970. They inherited many great innovations, outlandish ideas, amazing talent, and decades of research from academia and Cold War-inspired government grants. Ever since Sputnik, the National Science Foundation and the US Advanced Research Projects Agency had funded raw research. During Vietnam, that funding dried up and private industry moved in to take products to market. 

Arthur Rock had come into Xerox in 1969, on the back of an investment into Scientific Data Systems. While on the board of Xerox, he got to see the advancements being made at PARC. PARC hired some of the oNLine System (NLS) team, who helped ship the Xerox Alto in 1973 - a couple thousand computers in all. They followed that up with the Xerox Star in 1981, selling about 20,000. But PARC had been at it the whole time, inventing all kinds of goodness. 

And so always thinking of the next computer, Apple started the Lisa project in 1978, the year after the release of the Apple II, when profits were just starting to roll in. 

Story has it that Steve Jobs secured a visit to PARC and made out the back with the idea for a windowing personal computer GUI complete with a desktop metaphor. But not so fast: Apple had already begun the Lisa and Macintosh projects before Jobs visited Xerox, and the Alto had been shown off internally at Xerox in 1977, complete with Mother of All Demos-esque theatrics on stage using remote computers. They had the GUI, the mouse, and networking - while the other computers released that year, the Apple II, Commodore PET, and TRS-80, were still doing what Dartmouth, the University of Illinois, and others had been doing since the 60s - just at home instead of on time-sharing computers. 

In other words, enough people in computing had seen the oNLine System from Stanford. The graphical interface was coming and wouldn’t be stopped. The mouse had been written about in scholarly journals. But it was all pretty expensive. The visits to PARC, and hiring some of the engineers, helped the teams at Apple figure out some of the problems they didn’t even know they had. They helped make things better and they helped the team get there a little quicker. But by then the coming evolution in computing was inevitable. 

Still, the Xerox Star was considered a failure. But Apple said “hold my beer” and got to work on a project that would become the Lisa. It started off simply enough: some ideas from Apple executives like Steve Jobs and then 10 people, led by Ken Rothmuller, to develop a system with windows and a mouse. Rothmuller got replaced with John Couch, Apple’s 54th employee. Trip Hawkins got a great education in marketing on that team. He would later found Electronic Arts, one of the biggest video game publishers in the world.

Larry Tesler, from the Stanford AI Lab and then Xerox PARC, joined the team to run the system software team. He’d been on the ARPANET since writing Pub, an early markup language, and was instrumental in the Gypsy word processor, Smalltalk, and inventing copy and paste. Makes you feel small to think of some of this stuff. 

Bruce Daniels, one of the Zork creators from MIT, joined the team from HP as the software manager. 

Wayne Rosing, formerly of Digital and Data General, was brought in to design the hardware. He’d later lead the Sparc team and then become a VP of Engineering at Google.  

The team grew. They brought in Bill Dresselhaus as a principal product designer for the look and use and design and even packaging. They started with a user interface and then created the hardware and applications. 

Eventually there would be nearly 100 people working on the Lisa project and it would run over $150 million in R&D. After 4 years, they were still facing delays, and while Jobs had been becoming more and more involved, he was removed from the project. Though the personal accounts I’ve heard sound closer to other large, out-of-control projects I’ve seen at companies. 

The Apple II used that MOS 6502 chip. And life was good. The Lisa used the Motorola 68000 at 5 MHz. This was a new architecture to replace the 6800. It was time to go 32-bit. 

The Lisa was supposed to ship with between 1 and 2 megabytes of RAM. It had a built-in 12 inch screen that was 720 x 364. 
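A rough way to see why a bitmapped GUI pushed memory requirements so hard: even at one bit per pixel, a screen the size of the Lisa's eats a meaningful slice of RAM all on its own. A back-of-the-envelope sketch (the Lisa's actual video memory layout may differ; this is just the pixel arithmetic):

```python
# One bit per pixel for a black-and-white bitmapped display.
width, height = 720, 364
pixels = width * height            # 262,080 pixels on screen
framebuffer_bytes = pixels // 8    # 32,760 bytes, roughly 32KB
print(framebuffer_bytes)  # 32760
```

Against the 48KB ceiling of a maxed-out Apple II, a 32KB framebuffer alone makes it clear why the Lisa was specced with a megabyte or more.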

They got to work building applications, releasing LisaWrite, LisaCalc, LisaDraw, LisaGraph, LisaGuide, LisaList, LisaProject, and LisaTerminal. They translated it to British English, French, German, Italian, and Spanish. 

All the pieces were starting to fall into place. But the project kept growing. And delays. Jobs got booted from the Lisa project amidst concerns it was bloated, behind schedule, wasting company resources, and that Jobs’ perfectionism was going to result in a product that could never ship. The cost of the machine was over $10,000. 

Thing is, as we’ll get into later, every project went over budget and ran into delays for the next decade. Great ideas could then be capitalized on by others - even if a bit watered down. Some projects need to teach us how not to do projects - improve our institutional knowledge about the project or product discipline. That didn’t exactly happen with Lisa. 

We see times in the history of computing and technology for that matter, when a product is just too far advanced for its time. That would be the Xerox Alto. As costs come down, we can then bring ideas to a larger market. That should have been the Lisa. But it wasn’t. While nearly half the cost of a Xerox Star, less than half the number of units were sold.

Following the release of the Lisa, we got other desktop metaphors and graphical interfaces. Agat out of the Soviet Union, SGI, Visi (makers of VisiCalc), GEM from Digital Research, DeskMate from Tandy, Amiga Intuition, Acorn Master Compact, Arthur for the ARM, and the initial releases of Microsoft Windows. By the late 1980s the graphical interface was ubiquitous, and computers were easier for the novice to use than they’d ever been before. 

But developers didn’t flock to the system as they’d done with the Apple II. You needed a specialized development workstation so why would they? People didn’t understand the menuing system yet. As someone who’s written command line tools, sometimes they’re just easier than burying buttons in complicated graphical interfaces. 

“I’m not dead yet… just… badly burned. Or sick, as it were.” Apple released the Lisa 2 in 1984. It went for about half the price and was a little more stable. One reason was that the Twiggy disk drives Apple built for the Lisa were replaced with Sony microfloppy drives. This looked much more like what we’d get with the Mac, only with expansion slots. 

The end of the Lisa project was more of a fizzle. After the original Mac was released, Lisa shipped as the Macintosh XL, for $4,000. Sun Remarketing built MacWorks to emulate the Macintosh environment and that became the main application of the Macintosh XL. 

Sun Remarketing bought 5,000 of the Mac XLs and improved them somewhat. The last of the 2,700 Lisa computers were buried in a landfill in Utah in 1989. As the whole project had been, they ended up being a write-off. Apple traded them out for a deep discount on the Macintosh Plus. By then, Steve Jobs was long gone, Apple was all about the Mac and the next year General Magic would begin ushering in the era of mobile devices. 

The Lisa was a technical marvel at the time and a critical step in the evolution of the desktop metaphor, then nearly twenty years old, beginning at Stanford on NASA and ARPA grants, evolving further at PARC when members of the team went there, and continuing on at Apple. The lessons learned in the Lisa project were immense and helped inform the evolution of the next project, the Mac. But might the product have actually gained traction in the market if Steve Jobs had not been telling people within Apple and outside that the Mac was the next thing, while the Apple II line was still accounting for most of the revenue of the company? There’s really no way to tell. The Mac used a newer Motorola 68000 at nearly 8 megahertz so was faster, the OS was cleaner, the machine was prettier. It was smaller, boxier like the newer Japanese cars at the time. It was just better. But it probably couldn’t have been if not for the Lisa.

Lisa was slower than it was supposed to be. The operating system tended to be fragile. There were recalls. Steve Jobs was never afraid to cannibalize a product to make the next awesome thing. He did so with Lisa. If we step back and look at the Lisa as an R&D project, it was a resounding success. But as a public company, the shareholders didn’t see it that way at the time. 

So next time there’s an R&D project running amuck, think about this. The Lisa changed the world, ushering in the era of the graphical interface. All for the low cost of $50 million after sales of the device are taken out of it. But they had to start anew with the Mac and only bring in the parts that worked. They built out too much technical debt while developing the product to do anything else. While it can be painful - sometimes it’s best to start with a fresh circuit board and a blank command line editor. Then we can truly step back and figure out how we want to change the world.

Apple: The Apple I computer to the ///


I’ve been struggling with how to cover a few different companies, topics, or movements for a while. The lack of covering their stories thus far has little to do with their impact - it’s just been hard to find where to put them in the history of computing. One of the most challenging is Apple. This is because there isn’t just one Apple. Instead there are stages of the company, each with its own place in the history of computers. 

Today we can think of Apple as one of the Big 5 tech companies, which include Amazon, Apple, Google, Facebook, and Microsoft. But there were times in the evolution of the company where things looked bleak. Like maybe they would get gobbled up by another tech company. To oversimplify the development of Apple, we’ll break up their storied ascent into four parts:

  • Apple Computers: This story covers the mid-1970s to mid 1980s and covers Apple rising out of the hobbyist movement and into a gangbuster IPO. The Apple I through III families all centered on one family of chips and took the company into the 90s.
  • The Macintosh: The rise and fall of the Mac covers the introduction of the now-iconic Mac through to the Power Macintosh era. 
  • Mac OS X: This part of the Apple story begins with the return of Steve Jobs to Apple and the acquisition of NeXT, looks at the introduction of the Intel Macs and takes us through to the transition to the Apple M1 CPU.
  • Post PC: Steve Jobs announced the “post PC” era in 2007, and in the coming years the sales of PCs fell for the first time, while tablets, phones, and other devices emerged as the primary means people used devices. 

We’ll start with the early days, which I think of as one of the four key Apple stages of development. And those early days go back far past the days when Apple was hawking the Apple I. They go back to high school.

Jobs and Woz

Bill Fernandez and Steve Wozniak built a computer they called “The Cream Soda Computer” in 1970 when Bill was 16 and Woz was 20. It was a crude punch card processing machine built from some parts Woz got from the company he was working for at the time.

Fernandez introduced Steve Wozniak to a friend from middle school because they were both into computers and both had a flair for pranky rebelliousness. That friend was Steve Jobs. 

By 1972, the pranks turned into their first business. Wozniak designed Blue Boxes, initially conceived by Cap’n Crunch John Draper, who got his phreaker name from a whistle in a Cap’n Crunch box that made a tone at 2600 Hz that sent AT&T phones into operator mode. Draper would actually be an Apple employee for a bit. They designed a digital version and sold a few thousand dollars’ worth. 

Jobs went to Reed College. Wozniak went to Berkeley. Both dropped out. 

Woz got a sweet gig at HP designing calculators, where Jobs had worked a summer job in high school. Jobs went to India to find enlightenment. When Jobs became employee number 40 at Atari, he got Wozniak to help create Breakout. That was the year the Altair 8800 was released, and Wozniak went to the first meeting of a little club called the Homebrew Computer Club in 1975, when they got an Altair so the People’s Computer Company could review it. And that was the inspiration. Having already built one computer with Fernandez, Woz designed schematics for another. Going back to the Homebrew meetings to talk through ideas and nerd out, he got it built and, proud of his creation, returned to Homebrew with Jobs to give out copies of the schematics for everyone to play with. This was the age of hackers and hobbyists. But that was about to change ever so slightly. 

The Apple I 

Jobs had this idea: what if they sold the boards? They came up with a plan. Jobs sold his VW Microbus and Wozniak sold his HP-65 calculator, and they got to work. Simple math: they could sell 50 boards for $40 each and make some cash, like they’d done with the blue boxes. But you know, a lot of people didn’t know what to do with a bare board. Sure, you just needed a keyboard and a television, but that still seemed a bit much. 

Then came a slightly bigger plan - what if they sold 50 full computers? They went to the Byte Shop and talked them into buying 50 at $500 each. They dropped $20,000 on parts and netted a $5,000 return. They’d go on to sell about 200 of the Apple Is between 1976 and 1977.
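The back-of-the-napkin economics of the Byte Shop deal are simple enough to check (the dollar figures are from the story; the breakdown is mine):

```python
# The Byte Shop order as described: 50 machines at $500 wholesale,
# against roughly $20,000 spent on parts.
units = 50
wholesale_price = 500
parts_cost = 20_000

revenue = units * wholesale_price  # $25,000 gross from the order
net = revenue - parts_cost         # the $5,000 return
print(revenue, net)  # 25000 5000
```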

It came with a MOS 6502 chip running at a whopping 1 MHz and with 4KB of memory, which could go to 8. They provided Apple BASIC, as most vendors did at the time. That MOS chip was critical. Before it, many used an Intel or the Motorola 6800, which went for $175. But the MOS 6502 was just $25. It was an 8-bit microprocessor designed by a team that Chuck Peddle ran after leaving the 6800 team at Motorola. Armed with that chip at that price, and with Wozniak’s understanding of what it needed to do and how it interfaced with other chips to access memory and peripherals, the two could do something new. 

They started selling the Apple 1 and to quote an ad “the Apple comes fully assembled, tested & burned-in and has a complete power supply on-board, initial set-up is essentially “hassle free” and you can be running in minutes.” This really tells you something about the computing world at the time. There were thousands of hobbyists and many had been selling devices. But this thing had on-board RAM and you could just add a keyboard and video and not have to read LEDs to get output. The marketing descriptions were pretty technical by modern Apple standards, telling us something of the users. It sold for $666.66.

They got help from Patty Jobs building logic boards. Jobs’ friend from college Daniel Kottke joined for the summer, as did Fernandez and Chris Espinosa - now Apple’s longest-tenured employee. It was a scrappy garage kind of company. The best kind. 

They made the Apple I until a few months after they released the successor. But the problem with the Apple I was that there was only one person who could actually support it when customers called: Wozniak. And he was slammed, busy designing the next computer and all the components needed to take it to the mass market, like monitors, disk drives, etc. So they offered a discount for anyone returning the Apple I and destroyed most of the returned units. Those Apple I computers have now been auctioned for hundreds of thousands of dollars, all the way up to $1.75 million. 

The Apple II

They knew they were on to something. But a lot of people were building computers. They needed capital if they were going to bring in a team and make a go at things. But Steve Jobs wasn’t exactly the type of guy venture capitalists liked to fund at the time.

Mike Markkula was a product-marketing manager at chip makers Fairchild and Intel who retired early after making a small fortune on stock options. That is, until he got a visit from Steve Jobs. He brought money but more importantly the kind of assistance only a veteran of a successful corporation who’d ridden that wave could bring. He brought in Michael "Scotty" Scott, employee #4, to be the first CEO and they got to work on mapping out an early business plan. If you notice the overlapping employee numbers, Scotty might have had something to do with that…

As you may notice from Wozniak selling his calculator, at the time computers weren’t that far removed from calculators. So Jobs brought in a calculator designer named Jerry Manock to design a plastic injection-molded case, or shell, for the Apple II. They used the same chip and a similar enough motherboard design. They stuck with the default 4KB of memory and provided jumpers to make it easier to go up to 48KB. They added a cassette interface for IO. They had a toggle circuit that could trigger the built-in speaker. And they would include two game paddles. This is similar to bundles provided with the Commodore and other vendors of the day. And of course it still worked with a standard TV - but now that TVs were mostly color, so was the video coming out of the Apple II. And all of this came at a starting price of $1,298.

The computer initially shipped with a version of BASIC written by Wozniak, but Apple later licensed Microsoft’s 6502 BASIC to ship what they called Applesoft BASIC, short for Apple and Microsoft. Here, they turned to Randy Wigginton, who was Apple’s employee #6 and had gotten rides to the Homebrew Computer Club from Wozniak as a teenager (since he lived down the street). He and others added features onto Microsoft BASIC to free Wozniak to work on other projects. They also decided they needed a disk operating system, or DOS. Here, rather than license CP/M, the industry standard at the time, Wigginton worked with Shepardson, who did various projects for CP/M and Atari.  

The motherboard on the Apple II remains an elegant design. There were certain innovations that Wozniak made, like cutting down the number of DRAM chips by sharing resources between other components. The design was so elegant that Bill Fernandez had to join them as employee number four, in order to help take the board and create schematics to have it silkscreened.  The machines were powerful.

All that needed juice. Jobs asked his former boss Al Alcorn for someone to help out with that. Rod Holt, employee number 5, was brought in to design the power supply. By implementing a switching power supply, as Digital Equipment had done in the PDP-11, rather than a transformer-based power supply, the Apple II ended up being far lighter than many other machines. 

The Apple II was released in 1977 at the West Coast Computer Fair. It, along with the TRS-80 and the Commodore PET would become the 1977 Trinity, which isn’t surprising. Remember Peddle who ran the 6502 design team - he designed the PET. And Steve Leininger was also a member of the Homebrew Computer Club who happened to work at National Semiconductor when Radio Shack/Tandy started looking for someone to build them a computer. 

The machine was stamped with an Apple logo. Jobs hired Rob Janoff, a local graphic designer, to create it: a picture of an apple rendered in rainbow stripes, showing that the Apple II had color graphics. This rainbow Apple stuck and remained the logo for Apple Computer until 1998, after Steve Jobs returned to Apple, when the logo went all-black - but the silhouette is now iconic, serving Apple for 45 years and counting.

The computers were an instant success and sold quickly. But others were doing well in the market. Some incumbents and some new. Red oceans mean we have to improve our effectiveness. So this is where Apple had to grow up to become a company. Markkula made a plan to get Apple to $500 million in sales in 10 years on the backs of his $92,000 investment and another $600,000 in venture funding. 

They did $2.7 million in sales in 1977. This idea of selling a pre-assembled computer to the general public was clearly resonating. Parents could use it to help teach their kids. Schools could use it for the same. And when we were done with all that, we could play games on it. Write code in BASIC. Or use it for business. Make some documents in WordStar, spreadsheets in VisiCalc, or use one of the thousands of titles available for the Apple II. Sales grew 150x until 1980.

Given that many thought cassettes were for home machines and floppies were for professional machines, it was time to move away from tape. Markkula realized this and had Wozniak design a floppy disk drive for the Apple II, which went on to be known as the Disk II. Wozniak had experience with disk controllers and studied the latest available. Wozniak again managed to come up with a value-engineered design that allowed Apple to produce a good drive for less than any other major vendor at the time. Wozniak would actually later go on to say that it was one of his best designs (and many contemporaries agreed).

Markkula filled gaps as well as anyone. He even wrote free software programs under the name of Johnny Appleseed, a name also used for years in product documentation. He was a classic hacker type of entrepreneur on their behalf, sitting in the guerrilla marketing chair some days, acting as president of the company on others, and mentoring Jobs on still others.  

From Hobbyists to Capitalists

Here’s the thing - I’ve always been a huge fan of Apple. Even in their darkest days, which we’ll get to in later episodes, they represented an ideal. But going back to the Apple 1, they were nothing special. Even the Apple II. Osborne, Commodore, Vector Graphics, Atari, and hundreds of other companies were springing up, inspired first by that Altair and then by the rapid drop in the prices of chips. 

The impact of the 1 megahertz barrier and the cost of those MOS 6502 chips was profound. The MOS 6502 would be used in the Apple II, the Atari 2600, the Nintendo NES, and the BBC Micro. And along with the Zilog Z80 and Intel 8080 it would spark a revolution in personal computers. Many of those companies would disappear in what we’d think of as a personal computer bubble, had there been more money in it. But those that survived took things an order of magnitude higher. Instead of making millions they were making hundreds of millions. Many would even go to war in a race to the bottom on prices. And this is where Apple started to differentiate themselves from the rest. 

For starters, due to how anemic the default Altair was, most of the hobbyist computers were all about expansion. You can see it on the Apple I schematics and you can see it in the minimum of 7 expansion slots in the Apple II lineup of computers. Well, all of them except the IIc, marketed as a more portable type of device, with a handle and an RCA connection to a television for a monitor. 

The media seemed to adore them. In an era of JR Ewing of Dallas, Steve Jobs was just the personality to emerge and still somewhat differentiate the new wave of computer enthusiasts. Coming at the tail end of an era of social and political strife, many saw something of themselves in Jobs. He looked the counter-culture part. He had the hair, but also this drive. The early 80s were going to be all about the yuppies though - and Jobs was putting on a suit. Many identified with that as well.

Fueled by the 150x sales performance shooting them up to $117M in sales, Apple filed for an IPO, going public in 1980, creating hundreds of millionaires, including at least 40 of their own employees. It was the biggest IPO since Ford in 1956, the year after Steve Jobs was born. The stock was offered at $14 and shot up to $29 on the first day alone, leaving Apple sitting pretty on a $1.778 billion valuation. 

Scotty, who brought the champagne, made nearly a $100M profit. One of the Venture Capitalists, Arthur Rock, made over $21M on a $57,600 investment. Rock had been the one to convince the Shockley Semiconductor team to found Fairchild, a key turning point in putting silicon into the name of Silicon Valley. When Noyce and Moore left there to found Intel, he was involved. And he would stay in touch with Markkula, who was so enthusiastic about Apple that Rock invested and began a stint on the board of directors at Apple in 1978, often portrayed as the villain in the story of Steve Jobs. But let’s think about something for a moment. Rock was a backer of Scientific Data Systems, purchased by Xerox in 1969, becoming the Xerox 500. Certainly not Xerox PARC and in fact, the anti-PARC, but certainly helping to connect Jobs to Xerox later as Rock served on the board of Xerox.

The IPO Hangover

Money is great to have but also causes problems. Teams get sidetracked trying to figure out what to do with their hauls. Like Rod Holt’s $67M haul that day. It’s a distraction in a time when executional excellence is critical. We have to bring in more people fast, which created a scenario Mike Scott referred to as a “bozo explosion.” Suddenly more people actually makes us less effective. 

Growing teams all want a seat at a limited table. Innovation falls off as we rush to keep up with the orders and needs of existing customers. Bugs, bigger code bases to maintain, issues with people doing crazy things. 

Taking our eyes off the ball and normalizing the growth can be hard. By 1981, Scotty was out after leading some substantial layoffs.  Apple stock was down. A big IPO also creates investments in competitors. Some of those would go on a race to the bottom in price. 

Apple didn’t compete on price. Instead, they started to plan the next revolution, a key piece of Steve Jobs emerging as a household name. They would learn what the research and computer science communities had been doing - and bring a graphical interface and mouse to the world with Lisa and a smaller project brought forward at the time by Jef Raskin that Jobs tried to kill - but one that Markkula not only approved, but kept Jobs from killing, the Macintosh. 

Fernandez, Holt, Wigginton, and even Wozniak just drifted away or got lost in the hyper-growth of the company, as is often the case. Some came back. Some didn’t. Many of us go through the same in rapidly growing companies. 

Next (but not yet NeXT)

But a new era of hackers was on the way. And a new movement as counter to the big computer culture as Jobs. But first, they needed to take a trip to Xerox. In the meantime, the Apple III was an improvement but proved that the Apple line had run its course. They released it in 1980, recalled the first 14,000 machines, and never passed 75,000 machines sold, killing off the line in 1984. A special year.

A Steampunk's Guide To Clockworks: From The Cradle Of Civilization To Electromechanical Computers


We mentioned Robert Hooke in the episode on the Scientific Revolution. And Leibniz. They not only worked in the new branches of science, math, and philosophy, but they put many of their theories to use and were engineers.

Computing at the time was mechanical, what we might now think of as clockwork. And clockwork was starting to get some innovative new thinking. As we’ve covered, clockworks go back thousands of years. But with a jump in more and more accurate machining and more science, advances in timekeeping were coming. Hooke and Huygens worked on pendulum clocks and then moved to spring-driven clocks. Both sought English patents, and because the clocks didn’t yet work that well, neither was granted. But more needed to happen to improve the accuracy of time.

Time was becoming increasingly important - not only for showing up to appointments and computing ever more complex math problems, but also for navigation. Going back to the Greeks, we’d been estimating our position on the Earth in degrees, minutes, and seconds. And a rapidly growing maritime power like England needed clocks to guide its ships. Why?

The world is a sphere. A sphere has 360 degrees, and each degree divides into 60 minutes, giving 21,600 minutes of arc. A nautical mile was defined as one minute of arc along a meridian, so the north-south circumference is 21,600 nautical miles. The world isn’t a perfect sphere, though, so the circumference around the equator is closer to 21,639 nautical miles. Each nautical mile is 6,076 feet. When traveling by sea, trying to do all that math in feet and inches is terribly difficult, so we navigate by lines of latitude, running east-west, and lines of longitude, running north-south - each degree spanning 60 minutes of arc, or 60 nautical miles. The distance between lines of longitude naturally goes down as one gets closer to the poles. The problem was that the most accurate way to check your position was relative to the sun at noon, or against Polaris, the North Star, at night - and turning that sighting into longitude required knowing, precisely, what time it was back home.
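To make that arithmetic concrete, here’s a small sketch in Python. The numbers are the idealized definitions (one nautical mile as one minute of arc, a 24-hour rotation), not survey-grade figures:

```python
import math

# One nautical mile was defined as one minute of arc along a meridian:
# 360 degrees * 60 minutes = 21,600 minutes of arc around the sphere.
MERIDIAN_NM = 360 * 60  # 21,600 nautical miles, north-south circumference

# A degree of latitude spans 60 nautical miles everywhere, but the
# width of a degree of longitude shrinks toward the poles.
def longitude_degree_nm(latitude_deg):
    return 60 * math.cos(math.radians(latitude_deg))

# The navigator's problem: the Earth turns 360 degrees in 24 hours,
# so each hour of difference between local noon and noon back home
# equals 15 degrees of longitude. Knowing "time back home" accurately
# is exactly what a shipboard clock had to provide.
def longitude_from_time(hours_offset):
    return 15 * hours_offset
```

At the equator a degree of longitude is the full 60 nautical miles; at 60 degrees north it has shrunk to 30. And two hours of clock difference puts a ship 30 degrees of longitude from home - which is why a clock that kept time at sea was worth a fortune.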

Much of this went back to the Greeks and further. The Sumerians developed the sexagesimal system, or base 60, and passed it down to the Babylonians in the 3rd millennium BCE, and by 2000 BCE we had the solar year and the sundial. As empires grew rich with trade and growing cities, by 1500 BCE the Egyptians had developed the first water clock timers, evidenced by the Karnak water clock, beginning as a controlled amount of water filling a vessel until it reached marks. Water could be moved, too - horizontal water wheels were developed as far back as the 4th millennium BCE.

Both the sundial and the water clock became more precise in the ensuing centuries, taking location and the time of the year into account. Due to water reacting differently in various climates we also got the sandglass, now referred to as the hourglass. 

The sundial became common in Greece by the sixth century BCE, as did the water clock, which they called the clepsydra. By then it had a float that would tell the time. Plato even supposedly added a bowl full of balls to his inflow water clock that would dump them on a copper plate as an alarm during the day for his academy. 

We still use the base 60 scale and the rough solar years from even more ancient times. But every time sixty seconds ticks by something needs to happen to increment a minute and every 60 minutes needs to increment an hour. From the days of Thales in the 600s BCE and earlier, the Greeks had been documenting and studying math and engineering. And inventing. All that gathered knowledge was starting to come together.

Ctesibius was potentially the first to head the Library of Alexandria and while there developed the siphon, force pumps, compressed air, and so the earliest uses of pneumatics. He is credited with adding a scale and a float to the water clock, and thus mechanics. And he expanded its use to include water-powered gearing that produced sound and moved dials with wheels.

The Greek engineer Philo of Byzantium, in the 240s BCE if not earlier, added an escapement to the water clock. He started by simply applying a counterweight to the end of a spoon; as the spoon filled, a ball was released. He also described a robotic maid who poured wine when a cup was placed in her hand.

Archimedes added the idea that objects displaced water based on their volume but also mathematical understanding of the six simple machines. He then gets credited for being the first to add a gear to a water clock. We now have gears and escapements. Here’s a thought, given their lifetimes overlapping, Philo, Archimedes, and Ctesibius could have all been studying together at the library. Archimedes certainly continued on with earlier designs, adding a chime to the early water clocks. And Archimedes is often credited for providing us with the first transmission gears.

The Antikythera device proves the Greeks also made use of complex gearing, transferring energy in more complex gearing patterns. It is hand cranked but shows mathematical and gearing mastery: choose a day and year and see when the next eclipse and Olympiad would be. And the Greeks were all too happy to use gearing for other devices, such as an odometer in the first century BCE, and to build the Tower of the Winds, an entire building that acted as a detailed and geared water clock, as well as perhaps a model of the universe.

And we got the astrolabe at the same time, from Apollonius or Hipparchus. The astrolabe was a circle of metal with an arm called an alidade that users sighted along to measure the altitude of a star and, based on that, work out their location. The gearing was simple but the math required to get accurate readings was not. These were analog computers of a sort - you gave them an input and they produced an output. At this point they were mostly used by astronomers, and they continued to be used by Western philosophers at least through the Byzantines. But a new empire had risen.

The sundial, water clocks, and many of these engineering concepts were brought to Rome as the empire expanded, many from Greece. The Roman Vitruvius is credited with taking that horizontal water wheel and flipping it vertical. Around the same time, Augustus Caesar built a large sundial in Campus Martius. The Romans also added a connecting rod to cranks, giving us sawmills in the third century. The larger the empire, the more time people spent in appointments and the more important time became - but also the more people could notice the impact that automata had. Granted, much of it was large, like a windmill at the time, but most technology starts huge and miniaturizes as more precision tooling becomes available to increasingly talented craftspeople and engineers.

Marcus Vitruvius Pollio was an architect who wrote 10 books in the 20s BCE about technology. His works link aqueducts to water-driven machinations that could raise water from mines, driven by a man walking on a wheel above ground like a hamster does today but with more meaning. They took works from the Hellenistic era and put them in use on an industrial scale. This allowed them to terraform lands and spring new cities into existence. Sawing timber with mills using water to move saws allowed them to build faster. And grinding flour with mills allowed them to feed more people.

Heron of Alexandria would study and invent at the Library of Alexandria, amongst scrolls piled to the ceilings in halls with philosophers and mechanics. The inheritor of so much learning, he developed vending machines, statues that moved, and even a steam engine. If the Greeks and early Roman conquerors of Alexandria could figure out how a thing worked, they could automate it.

Many automations were there to prove the divine, such as water-powered counterweights that opened doors when priests summoned a god, or blew compressed air through trumpets. He also used a windmill to power an organ, and built a programmable cart using a weight to turn a drive axle. He also developed an omen machine, with ropes and pulleys on a gear that caused a bird to sing, the song driven by a simple whistle being lowered into water. His inventions likely funded more and more research.

But automations in Greek times were powered by natural forces - hand cranked, fire driven, or water powered. Heron also created a chain-driven automatic crossbow, showing the use of a chain-driven machine, and he used gravity to power machines, automating devices as sand escaped from those sandglasses.

He added pegs to pulleys so the distance travelled could be programmed. Simple and elegant machines. And his automata extended into the theater. He kept combining simple machines and ropes and gravity into more and more complex combinations, getting to the point that he could run an automated twenty minute play. Most of the math and mechanics had been discovered and documented in the countless scrolls in the Library of Alexandria. 

And so we get the term automated, from the Greek for acting of oneself. But automations weren’t exclusive to the Greeks. By the time Caligula was emperor of the Roman Empire, bronze valves could be used to feed iron pipes in his floating ships, which came complete with heated floors. People were becoming more and more precise in engineering, and many a device was for telling time. The word clock comes from the Medieval Latin for bell, clocca. I guess bells should automatically ring at certain times. Getting there...

Technology spreads or is rediscovered. By Heron’s time the Greeks and Romans understood steam, pistons, gears, pulleys, programmable automations, and much of what would have been necessary for an industrial or steampunk revolution. But slaves were cheap and plentiful in the empire, so the technology was used in areas where they weren’t. At Barbegal, built to feed Arles in modern France, the Romans had a single hillside flour-grinding complex with automated hoppers, capable of supplying flour to thousands of Romans. Constantine, the first Christian Roman emperor, was based there before founding Constantinople.

And as Christianity spread, the gimmicks that enthralled the people as magic were no longer necessary. The Greeks were pagans and so many of their works would be cleansed or have Christian writings copied over them. Humanity wasn’t yet ready. Or so we’ve been led to believe. 

The inheritors of the Roman Empire were the Byzantines, based where Europe meets what we now think of as the Middle East. We have proof of geared portable sundials there, fewer gears but showing evidence of the continuation of automata and the math used to drive it persisting in the empire through to the 400s. And maybe confirming written accounts that there were automated lions and thrones in the empire of Constantinople. And one way geared know-how continued and spread was along trade routes which carried knowledge in the form of books and tradespeople and artifacts, sometimes looted from temples. One such trade route was the ancient Silk Road (or roads).

Water clocks were being used in Egypt, Babylon, India, Persia, Greece, Rome, and China. The Tang Dynasty in China took, or rediscovered, the escapement to develop a water-powered clockwork escapement in the 700s, and then the Song Dynasty developed astronomical clock towers in the 900s. Su Sung is often credited with the first mechanical water clock, in 1092, and his Cosmic Engine would mark the transition from water clocks to fully mechanical clocks, although it was still hydromechanical. The 1100s saw Bhoja of the Paramara dynasty in India emerge as a patron of the arts and sciences and write a chapter on mechanical bees and birds. These innovations could have been happening in a vacuum in each place - or word and works could have spread through trade.

As the Roman Empire retreated, that technology disappeared in Europe - plumbing in towns that could bring tap water to homes, clockworks, and more. The specialists and engineers in what are now England, France, and Germany lacked the training to build new works or even maintain many that existed. But the heads of rising eastern empires were happy to fund such efforts in a sprint to become the next Alexander. And so knowledge spread west from Asia and was infused with Greek and Roman know-how in the Middle East during the Islamic conquests. The new rulers expanded quickly, effectively taking possession of Egypt, Mesopotamia, parts of Asia, the Turkish peninsula, Greece, parts of Southern Italy, out towards India, and even Spain - in other words, all of the previous centers of science. And they were tolerant, not looking to convert conquered lands to Islam. This allowed them to learn from their subjects, in what we now think of as the Arabic translation movement, when Arabic philosophers translated, but also critiqued and refined, works from the lands they ruled.

This sparked the Muslim golden age, which became the new nexus of science at the time. Over time we saw Islamic empires like the Seljuks, ruling out of Baghdad, and the Abbasids fund science and philosophy. They brought caravans of knowledge into their capitals. The Abbasids even insisted on a specific text from Ptolemy (the Almagest) when negotiating a treaty, so they could bring it home for study. They founded schools of learning known as madrasas in every town - similar to a university system today.

Over the centuries following, they produced philosophers like Muhammad Ibn Musa Al-Khwarizmi, who solved quadratic equations, giving us algebra. This would become important as clockwork devices became more programmable (and for everything else algebra is great at helping with). They sent clockworks as gifts, such as a brass automatic water clock sent to Charlemagne between 802 and 807, complete with chimes. Yup, the clocca rang the bell.

They went far past where Heron left off though. There was Ibn-Sina, Al-Razi, Al-Jazari, Al-Kindi, Thābit ibn Qurra, Ridwan, and countless other philosophers carrying on the tradition. The philosophers took the works of the Greeks, copied them, and studied them. They evolved the technology to increasing levels of sophistication. And many of the philosophers completed their works at what might be considered the Islamic version of the Library of Alexandria, the House of Wisdom in Baghdad. In fact, when Baghdad was founded about 50 miles north of ancient Babylon, the Al-Mansur Palace Library was part of the plan, and over subsequent caliphs it was expanded, adding an observatory, to become what would then be called the House of Wisdom.

The Banu Musa brothers worked out of there and wrote twenty books, including the first Book of Ingenious Devices. Here, they took the principles the Greeks and others had focused on and got more into the applications of those principles. On the way to their compilation of devices, they translated books from other authors, including A Book on Degrees on the Nature of Zodiacal Signs from China, and Greek works. The three brothers combined pneumatics and aerostatics. They added plug valves, taps, float valves, and conical valves. They documented the siphon and the funnel for pouring liquids into machinery, and thought to put a float in a chamber to turn what we now think of as the first documented crankshaft. We had been turning circular motion into linear motion with wheels, but we were now able to turn linear motion into circular motion as well.

They used all of this to describe in engineering detail, if not build and invent, marvelous fountains. Some with multiple jets alternating. Some were wind powered and showed worm-and-pinion gearing.  

Al-Biruni, around the turn of the first millennium, came out of modern Uzbekistan and learned Sanskrit, Persian, Hebrew, and Greek. He wrote 95 books on astronomy and math. He studied the speed of light versus the speed of sound and the axis of the Earth, and applied the scientific method to statics and mechanics. This moved theories on balances and weights forward. He produced geared mechanisms that are the ancestors of modern astrolabes.

The astrolabe was also brought to the Islamic world. Muslim astronomers added newer scales and circles. As in antiquity, they used it in navigation, but they had another use: to aid in prayer by showing the way to Mecca.

Al-Jazari developed a number of water clocks and is credited with devices developed by others, thanks to penning another Book of Knowledge of Ingenious Mechanical Devices. Here, he describes a camshaft, crank-driven and reciprocating pumps, and two-way valves, and expands on the uses of pneumatic devices. He developed programmable humanoid robots in the form of automatic musicians on a boat. These complex automata included cams and pegs, similar to those developed by Heron of Alexandria, but with increasing levels of sophistication, showing we were understanding the math behind the engineering and it wasn’t just trial and error.

All golden ages must end. Or maybe just evolve and migrate. Fibonacci and Bacon quoted them, showing yet another direct influence from multiple sources around the world flowing into Europe following the Holy Wars.

Pope Urban II began inspiring European Christian leaders to wage war against the Muslims in 1095. And so the Holy Wars, or Crusades, began and would rage until 1271. Here, we saw manuscripts copied and philosophy flow back into Europe. Equally important were the Muslim caliphates in Spain and Sicily, and the trade routes. And another pair of threats were on the rise: the plague and the Mongols.

The Mongol invasions began in the 1200s and changed the political makeup of the known powers of the day. The Mongols sacked Baghdad and burned the House of Wisdom. After the Mongols and Mughals, the Islamic caliphates had warring factions internally, the empires fractured, and they turned towards more dogmatic approaches. The Ottoman Empire rose and would last until World War I, and while they continued to sponsor scientists and great learners, the nexus of scientific inquiry - and the engineering it inspired - shifted again, and the great works were translated with that shift, including into Latin, the language of learning in Europe. By 1492 the Moors would be kicked out of Spain. That link from Europe to the Islamic golden age is a critical aspect of the transfer of knowledge.

The astrolabe was one such transfer. As early as the 11th century, metal astrolabes arrived in France over the Pyrenees to the north, and to the west in Portugal. By the 1300s it had been written about by Chaucer and spread throughout Europe. Something else happened in the Iberian peninsula in 1492: Columbus sailed off to discover the New World. He also used a quadrant, or a quarter of an astrolabe - first written about in Ptolemy’s Almagest, but later further developed at the House of Wisdom as the sine quadrant.

The Ottoman Empire had focused on trade routes and trade. But while they could have colonized the New World during the Age of Discovery, they didn’t. The influx of wealth coming from the Americas caused inflation to spiral and the empire went into a slow decline over the ensuing centuries until the Turkish War of Independence, which began in 1919. 

In the meantime, the influx of money and resources and knowledge from the growing European empires saw clockworks and gearing arriving back in Europe in full force in the 14th century. 

In 1368 the first mechanical clockmakers got to work in England. Innovation was slowed by the Plague, which destroyed lives and property values, but clockwork had spread throughout Europe. The Fall of Constantinople to the Ottomans in 1453 sent a wave of Greek scholars throughout Europe. Ancient knowledge, enriched with a thousand years of Islamic insight, was about to meet a new level of precision metalwork that had been growing in Europe.

By 1495, Leonardo da Vinci showed off one of the first robots in the world - a knight that could sit, stand, and open its visor independently. He also made a robotic lion and repeated experiments from antiquity on self-driving carts. And we see a lot of toys following the mechanical innovations throughout the world. Because parents.

We think of the Renaissance as coming out of Italy, but scholars had been back at it throughout Europe since the High Middle Ages. By 1490, a locksmith named Peter Hele is credited with developing the first mainspring, in Nuremberg. This is pretty important for watches. You see, up to this point nearly every clockwork we’ve discussed was powered by water, by humans setting a dial, by fire, or by some other force. The mainspring stores energy as a small ribbon of metal is twisted around an axle, called an arbor, into a spiral and then wound tighter and tighter, thus winding the watch.

The mainspring drove a gear train of increasingly smaller gears which then sent energy into the escapement but without a balance wheel those would not be terribly accurate just yet. But we weren’t powering clocks with water.

At this point, clocks started to spread as expensive decorations, appearing on fireplace mantles and on tables of the wealthy. These were not small by any means. But Peter Henlein would get the credit in 1510 for the first real watch, small enough to be worn as a necklace.

By 1540, screws were small enough to be used in clocks, allowing them to get even smaller. The metals for gears were cut thinner, and clockmakers and toymakers were springing up all over the world. And money coming from speculative investments in the New World was starting to flow, fueling even more investment in technology.

Jost Burgi invented the minute hand in 1577. And as with the several disciplines he decided to jump into, Galileo Galilei had a profound impact on clocks. Galileo documented the physics of the pendulum in 1581, and he designed an escapement for a pendulum clock but died before building it. The center of watchmaking would move to Geneva later in that decade. Smaller clockworks spread with wheels and springs, and the 1600s would see an explosion in hundreds of different types of escapements and types of gearing.

In 1610 watches got glass to protect the dials, and in 1635 the French inventor Paul Viet of Blois added enamel to the dials. Meanwhile, Blaise Pascal developed the Pascaline in 1642, giving the world the adding machine.

But it took another real scientist to pick up Galileo’s work and put it into action to propel clocks forward. To get back to where we started, a golden age of clockwork was just getting underway. In 1657 Huygens created a clock driven by the pendulum, which by 1671 would see William Clement add the suspension spring and by 1675 Huygens would give us the balance wheel, mimicking the back and forth motion of Galileo’s pendulum. The hairspring, or balance spring, then controlled the speed making it smooth and more accurate. And the next year, we got the concentric minute hand.

I guess Robert Hooke gets credit for the anchor escapement, but the verge escapement had been in use for a while by then. So who gets to claim inventing some of these devices is debatable. Leibniz then added a stepped reckoner to the mechanical calculator in 1672, going from adding and subtracting to multiplication and division. Still calculating, and not really computing as we’d think of it today.

At this point we see a flurry of activity in a proto-industrial revolution. Descartes put forth that bodies are similar to complex machines and that various organs, muscles, and bones could be replaced with gearing, similar to how we can have a hip or heart replaced today. Consider this a precursor to cybernetics. We see even more mechanical toys for the rich - but labor was still cheap enough that automation wasn’t spreading faster.

And so we come back to the growing British empire. They had colonized North America and the empire had grown wealthy. They controlled India, Egypt, Ireland, the Sudan, Nigeria, Sierra Leone, Kenya, Cyprus, Hong Kong, Burma, Australia, Canada, and so much more. And knowing the exact time was critical for a maritime empire because we wouldn’t get radar until World War II. 

There were clocks, but still, the clocks built had to be corrected at various times based on a sundial. This is because we hadn’t yet gotten to the levels of constant power and precise gearing, and the ocean tended to mess with devices. The growing British Empire needed more reliable ways than those Ptolemy used to tell time. And so England offered prizes ranging from 10,000 to 20,000 pounds for more accurate ways to keep time at sea in the Longitude Act of 1714. Crowdsourcing.

It took until the 1720s. George Graham, yet another member of the Royal Society, picked up where Thomas Tompion left off and added the cylinder escapement to watches, and then the deadbeat escapement. He chose not to file patents for these so all watchmakers could use them. He also added mercurial compensation to pendulum clocks. And John Harrison added the gridiron compensation pendulum for his H1 marine chronometer.

1737 or 1738 sees another mechanical robot, but this time Jacques de Vaucanson brings us a duck that can eat, drink, and poop. But that type of toy was a one-off. The Swiss Jaquet-Droz built automated dolls that were meant to help sell more watches, but here we see complex toys that make music (without a water whistle) and can even write using programmable text. The toys still work today, and I feel lucky to have gotten to see them at the Museum of Art and History in Switzerland. Frederick the Great became entranced by clockwork automations. Magicians started to embrace automations for more fantastical sets.

At this point, our brave steampunks made other automations, and their automata got cheaper as the supply increased. By the 1760s Pierre Le Roy and Thomas Earnshaw had invented the temperature-compensated balance wheel. Around this time, the mainspring was moved into a going barrel so watches could continue to run while the mainspring was being wound. Many of these increasingly complicated components required a deep understanding of the math behind the simple machines going back to Archimedes, plus all of the discoveries made in the 2,000 years since.

And so in 1785 Josiah Emery made the lever escapement standard. The mechanical watch fundamentals haven’t changed a ton in the past couple hundred years (we’ll not worry about quartz watches here). But the 1800s saw an explosion in new mechanical toys using some of the technology invented for clocks. Time brings the cost of technology down so we can mass produce trinkets to keep the kiddos busy.  This is really a golden age of dancing toys, trains, mechanical banks, and eventually bringing in spring-driven wind-up toys. 

Another thing happened in the 1800s. With all of this knowhow on building automations, and all of this scientific inquiry requiring increasingly complicated mathematics, Charles Babbage started working on the Difference Engine in 1822 and then the Analytical Engine in 1837, bringing in the idea of a Jacquard loom punched card. The Babbage machines would become the precursor of modern computers, and while they would have worked if built to spec, were not able to be run in his lifetime. 
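The method of finite differences that the Difference Engine mechanized can be sketched in a few lines of Python - a toy model of the idea, not a description of the machine's actual columns and carry mechanisms:

```python
# The Difference Engine tabulated polynomials using only addition, via
# the method of finite differences: seed the columns with the initial
# value and its successive differences, then repeatedly add each column
# into the one to its left.
def difference_engine(initial_differences, steps):
    diffs = list(initial_differences)  # [f(0), Δf(0), Δ²f(0), ...]
    table = []
    for _ in range(steps):
        table.append(diffs[0])
        # Propagate: each column is incremented by the column to its
        # right - no multiplication anywhere, just addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

# For f(x) = x^2 + x + 1: f(0) = 1, Δf(0) = f(1) - f(0) = 2, Δ²f = 2.
print(difference_engine([1, 2, 2], 5))  # [1, 3, 7, 13, 21]
```

For a polynomial of degree n, the nth difference is constant, so a machine with n + 1 columns of addition could tabulate it forever - which is exactly why Babbage could propose printing mathematical tables from gears.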

Over the next few generations, we would see his dream turn into reality, and the electronic clock from Frank Hope-Jones in 1895. There would be other innovations, such as in the 1940s when the National Bureau of Standards (now NIST) created the first atomic clock. But in general parts got smaller, gearing more precise, and devices more functional. We’d see fits and starts for mechanical computers, with Percy Ludgate’s Analytical Machine in 1909, the Marchant Calculator in 1918, the electromechanical Enigma in the 1920s, the Polish Enigma double in 1932, the Z1 from Konrad Zuse in 1938, and the Mark 1 Fire Control Computer for the US Navy in the World War II era, when computers went electromechanical and electric, effectively ending the era of clockwork-driven machinations out of necessity, instead putting that into what I consider fun tinkerations.

Aristotle dreamed of automatic looms freeing humans from the trappings of repetitive manual labors so we could think. A Frenchman built them. Long before Aristotle, Pre-Socratic Greek legends told of statues coming to life, fire breathing statues, and tables moving themselves. Egyptian statues were also known to have come to life to awe and inspire the people. The philosophers of the Thales era sent Pythagoras and others to Egypt where he studied with Egyptian priests. Why priests? They led ascetic lives, often dedicated to a branch of math or science. And that’s in the 6th century BCE. The Odyssey was written about events from the 8th century BCE. 

We’ve seen time and time again in the evolutions of science that we often understood how to do something before we understood why. The legendary King Solomon and King Mu of the Zhou dynasty are said to have had automata, or clockwork, or moving statues, or to have been presented with these kinds of gifts, going back thousands of years. And there is the chance that they were. Since then, we’ve seen a steady advance of this back and forth between engineering and science.

Sometimes, we understand how to do something through trial and error or random discovery. And then we add the math and science to catch up to it. Once we do understand the science behind a discovery, we uncover better ways, and that opens up more discoveries. Aristotle’s dream was realized and extended to the point we can now close the blinds, lock the doors, control the lights, build cars, and even print cars. We mastered time in multiple dimensions, including Newton’s relative time. We learned to master space, mapping it to celestial bodies. And we mastered mechanics and the math behind it, then the electron, and managed to merge the two.

Which brings us to today. What do you have to do manually? What industries are still run by manual labor? How can we apply complex machines, or enrich what those can do with electronics, in order to free our fellow humans to think more? How can we make Aristotle proud? One way is to challenge and prove or disprove any of his doctrines in new and exciting ways, like Newton and then Einstein did. We each have so much to give. I look forward to seeing or hearing about your contributions when it’s time to write their histories!

Connections: ARPA > RISC > ARM > Apple's M1


Let’s oversimplify something in the computing world. Which is what you have to do when writing about history. You have to put your blinders on so you can get to the heart of a given topic without overcomplicating the story being told. And in the evolution of technology we can’t mention all of the advances that lead to each subsequent evolution. It’s wonderful and frustrating all at the same time. And that value judgement of what goes in and what doesn’t can be tough. 

Let’s start with the fact that there are two main types of processors in our devices. There’s the x86 chipset developed by Intel and AMD and then there’s the RISC-based processors, which are ARM and for the old school people, also include PowerPC and SPARC. Today we’re going to set aside the x86 chipset that was dominant for so long and focus on how the RISC and so ARM family emerged.   

First, let’s think about the main difference between ARM and x86. RISC chips, ARM among them, focus on reducing the number of instructions required to perform a task to as few as possible, and so RISC stands for Reduced Instruction Set Computing. Intel, other than the Atom series of chips, has focused the x86 on high performance and high throughput. Big and fast, no matter how much power and cooling is necessary.

The ARM processor uses simpler instructions, which means there’s less logic on the chip, and more instructions are required to perform certain logical operations. That increases memory use and can increase the time to complete an execution, which ARM developers address with techniques like pipelining, or instruction-level parallelism on a processor: overlapping the stages of instruction execution so a new instruction can start before the previous one finishes. Seymour Cray pioneered these ideas, and the CDC Star, Amdahl, and then ARM implemented them as well.
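The benefit of pipelining can be sketched with some simple cycle math. This is a toy model, not any real ARM core: without a pipeline, each instruction occupies the processor for all of its stages; with a pipeline, a new instruction enters every cycle once the pipe is full.

```python
# Toy cycle counts for a 3-stage pipeline (fetch, decode, execute).
# Hypothetical numbers for illustration, not a real processor model.

def cycles_unpipelined(n_instructions, stages=3):
    # Each instruction occupies the whole processor for all of its stages.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages=3):
    # Fill the pipe once (stages cycles), then retire one instruction per cycle.
    return stages + (n_instructions - 1)

print(cycles_unpipelined(100))  # 300 cycles, one instruction at a time
print(cycles_pipelined(100))    # 102 cycles with the stages overlapped
```

For long instruction streams, the pipelined machine approaches one instruction per cycle even though each individual instruction still takes three.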

The x86 chips are Complex Instruction Set Computing chips, or CISC. Those will do larger, more complicated tasks, like floating point math or memory operations, in a single instruction on the chip. That often requires more consistent and larger amounts of power.
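The trade-off is easiest to see in a toy example. The instruction names below are hypothetical, not real x86 or ARM mnemonics: a CISC-style machine might multiply two values straight out of memory in one instruction, while a RISC-style load/store machine needs several simpler instructions to do the same work.

```python
# A toy model of the CISC vs RISC trade-off (hypothetical instructions).
memory = {"a": 6, "b": 7, "result": 0}
registers = {}

# CISC-style: one complex instruction that touches memory directly.
def mul_mem(dst, src1, src2):
    memory[dst] = memory[src1] * memory[src2]

# RISC-style: only loads and stores touch memory; ALU ops work on registers.
def load(reg, addr):
    registers[reg] = memory[addr]

def mul(dst, r1, r2):
    registers[dst] = registers[r1] * registers[r2]

def store(addr, reg):
    memory[addr] = registers[reg]

mul_mem("result", "a", "b")   # 1 CISC instruction
cisc_result = memory["result"]

load("r1", "a")               # 4 RISC instructions for the same work
load("r2", "b")
mul("r3", "r1", "r2")
store("result", "r3")

assert cisc_result == memory["result"] == 42
```

Same answer either way; the CISC machine spends transistors on decoding the complex instruction, and the RISC machine spends instructions (and instruction memory) instead.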

ARM chips are built for low power. The reduced complexity of operations is one reason, but it’s also in the design philosophy. This means fewer heat sinks and often accounting for less consistent streams of power. The 130 watt x86 vs 5 watt ARM trade-off can mean slightly lower clock speeds, but the chips can cost more since people will spend less on heat sinks and power supplies. This also makes ARM excellent for mobile devices.

The inexpensive MOS 6502 chips helped revolutionize the personal computing industry in 1975, finding their way into the Apple II and a number of early computers. They were RISC-like, but CISC-like as well. They took some of their instruction set architecture lineage from the IBM System/360 through to the PDP, the Data General Nova, the Intel 8080, and Zilog, and after the emergence of Windows, Intel finally captured the personal computing market and the x86 flourished.

But the RISC architecture actually goes back to the ACE, designed in 1946 by Alan Turing. It wasn’t until the 1970s that Carver Mead from Caltech and Lynn Conway from Xerox PARC saw that the number of transistors on chips was outpacing what designers could manage while workloads were growing exponentially. ARPA and other agencies needed more and more instructions, so they instigated what we now refer to as the VLSI project, a DARPA program initiated by Bob Kahn to push into the 32-bit world. They would provide funding to different universities, including Stanford and the University of North Carolina.

Out of those projects, we saw the Geometry Engine, which led to a number of computer aided design, or CAD, efforts to aid in chip design. Those workstations, when linked together, evolved into tools used on the Stanford University Network, or SUN, which would effectively spin out of Stanford as Sun Microsystems. And across the bay at Berkeley we got a standardized Unix implementation that could use the tools being developed, the Berkeley Software Distribution, or BSD, which would eventually become the operating system used by Sun and SGI, and now lives on in OpenBSD and other variants.

And the efforts from the VLSI project led to Berkeley RISC in 1980 and Stanford MIPS, as well as the multi-chip wafer. The leader of that Berkeley RISC project was David Patterson, who still serves as vice chair of the RISC-V Foundation. The chips would add more and more registers, but with fewer specializations. This led to the need for more memory. But UC Berkeley students shipped a faster chip than was otherwise on the market in 1981. And the RISC II was usually double or triple the speed of the Motorola 68000.

That led to the Sun SPARC and DEC Alpha. There was another company paying attention to what was happening in the RISC project: Acorn Computers. They had been looking into using the 6502 processor until they came across the scholarly works coming out of Berkeley about their RISC project. Sophie Wilson and Steve Furber from Acorn then got to work building an instruction set for the Acorn RISC Machine, or ARM for short. They had the first ARM working by 1985, which they used to build the Acorn Archimedes.

The ARM2 would be faster than the Intel 80286, and by 1990 Apple was looking for a chip for the Apple Newton. A new company called Advanced RISC Machines, or ARM, would be founded, and from there they grew, with Apple being a shareholder through the 90s. By 1992, they were up to the ARM6, and the ARM610 was used for the Newton. DEC licensed the ARM architecture to develop the StrongARM, selling chips to other companies. Acorn would be broken up in 1998 and parts sold off, but ARM would live on until acquired by SoftBank for $32 billion in 2016. SoftBank is currently in acquisition talks to sell ARM to Nvidia for $40 billion.

Meanwhile, John Cocke at IBM had been working on RISC concepts since 1975 for embedded systems, and by 1982 IBM moved on to developing their own 32-bit RISC chips. This led to the POWER instruction set, which they shipped in 1990 as the RISC System/6000, or as we called them at the time, the RS/6000. They scaled that down to the PowerPC and in 1991 forged an alliance with Motorola and Apple. DEC designed the Alpha. It seemed as though the computer industry was Microsoft and Intel vs the rest of the world, using a RISC architecture.

But by 2004 the alliance between Apple, Motorola, and IBM began to unravel and by 2006 Apple moved the Mac to an Intel processor. But something was changing in computing. Apple shipped the iPod back in 2001, effectively ushering in the era of mobile devices. By 2007, Apple released the first iPhone, which shipped with a Samsung ARM. 

You see, the interesting thing about ARM is that, unlike Intel, they don’t fab chips - they license technology and designs. Apple licensed the Cortex-A8 from ARM for the iPhone 3GS by 2009 but had an ambitious lineup of tablets and phones in the pipeline. And so in 2010 Apple did something new: they made their own system on a chip, or SoC. Continuing to license some ARM technology, Apple pushed on, getting between 800 MHz and 1 GHz out of the chip and using it to power the iPhone 4, the first iPad, and the long overdue second-generation Apple TV. The next year came the A5, used in the iPad 2 and the first iPad Mini, then the A6 at 1.3 GHz for the iPhone 5, and the A7 for the iPhone 5s and iPad Air. That was the first 64-bit consumer SoC.

In 2014, Apple released the A8 processor for the iPhone 6, which came in speeds ranging from 1.1 GHz up to the 1.5 GHz chip in the 4th generation Apple TV. By 2015, Apple was up to the A9, which clocked in at 1.85 GHz for the iPhone 6s. Then we got the A10 in 2016, the A11 in 2017, the A12 in 2018, the A13 in 2019, and the A14 in 2020 with a neural engine, a 4-core GPU, and 11.8 billion transistors, compared to the 30,000 in the original ARM.

And it’s not just Apple. Samsung has been on a similar tear, firing up the Exynos line in 2011 and continuing to license ARM designs up to the Cortex-A55, with similar features to the Apple chips, used in devices like the Samsung Galaxy A21. And there’s Qualcomm’s Snapdragon. And the Broadcom SoCs.

In fact, the Broadcom SoC was used in the Raspberry Pi (developed in association with Broadcom) in 2012. The 5 models of the Pi helped bring on a mobile and IoT revolution. 

And so nearly every mobile device now ships with an ARM chip, as do many of the devices we place around our homes so our digital assistants can help run our lives. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. And the number is about to grow even more rapidly. Apple surprised many by announcing they were leaving Intel to design their own chips for the Mac.

Given that the PowerPC chips were RISC, the ARM chips in the mobile devices are RISC, and the history Apple has with the platform, it’s no surprise that Apple is going back that direction with the M1, Apple’s first system on a chip for a Mac. And the new MacBook Pro screams. Even software running in Rosetta 2 on my M1 MacBook is faster than on my Intel MacBook. And at 16 billion transistors, with an 8 core GPU and a 16 core neural engine, I’m sure developers are hard at work developing the M3 on these new devices (since you know, I assume the M2 is done by now). What’s crazy is, I haven’t felt like Intel had a competitor other than AMD in the CPU space since Apple switched from the PowerPC. Actually, those weren’t great days. I haven’t felt that way since I realized no one but me had a DEC Alpha or when I took the SPARC off my desk so I could play Civilization finally. 

And this revolution has been a constant stream of evolutions, 40 years in the making. It started with an ARPA grant, but various evolutions from there died out. And so really, it all started with Sophie Wilson. She helped give us the BBC Micro and the ARM. She was part of the move to Element 14 from Acorn Computers, then ended up at Broadcom when they bought the company in 2000, and she continues to act as the Director of IC Design. We can definitely thank ARPA for sprinkling funds around prominent universities to get us past 10,000 transistors on a chip. Given that chips continue to advance at such a lightning pace, I can’t imagine where we’ll be in another 40 years. But we owe her (and her coworkers at Acorn and the team at VLSI, now NXP Semiconductors) for their hard work and innovations.

Bob Taylor: ARPA to PARC to DEC


Robert Taylor was one of the true pioneers in computer science. In many ways, he is the string (or glue) that connected the US government’s era of supporting computer science through ARPA to the innovations that came out of Xerox PARC and then to the work done at Digital Equipment Corporation’s Systems Research Center. Those are three critical aspects of the history of computing, and while Taylor didn’t write any of the innovative code or develop any of the tools that came out of those three research environments, he saw people and projects worth funding and made sure the brilliant scientists got what they needed to get things done.

The 31 years in computing that his stops represented were some of the most formative for the young industry, and the advances he inspired span from Vannevar Bush’s 1945 article “As We May Think” to the explosion of the Internet across personal computers.

Bob Taylor inherited a world where computing was waking up to large, crusty, but finally fully digitized mainframes stuck to its eyes in the morning, and went to bed the year Corel bought WordPerfect because PCs needed applications, the year the Pentium 200 MHz was released, the year the Palm Pilot and eBay were founded, the year AOL started to show articles from the New York Times, the year IBM opened a web shopping mall, and the year the Internet reached 36 million people. Excite and Yahoo went public. Sometimes big, sometimes small, all of these can be traced back to Bob Taylor - kinda’ how we can trace all actors to Kevin Bacon. But more like if Kevin Bacon found talent and helped them get started, by paying them during the early years of their careers…

How did Taylor end up as the glue for the young and budding computing research industry? Going from tween to teenager during World War II, he went to Southern Methodist University in 1948, when he was 16. He jumped into the US Naval Reserves during the Korean War and then got his masters in psychology at the University of Texas at Austin using the GI Bill. Many of those pioneers in computing in the 60s went to school on the GI Bill. It was a big deal across every aspect of American life at the time - paving the way to home ownership, college educations, and new careers in the trades. From there, he bounced around, taking classes in whatever interested him, before taking a job at Martin Marietta, helping design the MGM-31 Pershing, and ending up at NASA, where he discovered the emerging computer industry.

Taylor was working on projects for the Apollo program when he met JCR Licklider, known as the Johnny Appleseed of computing. Lick, as his friends called him, had written an article called “Man-Computer Symbiosis” in 1960 and laid out a plan for computing that influenced many. One such person was Taylor. Lick had begun running ARPA’s Information Processing Techniques Office, or IPTO, in 1962, and in 1965 he succeeded in recruiting Taylor away from NASA to take his place running it.

Taylor had funded Douglas Engelbart’s research on computer interactivity at Stanford Research Institute while at NASA. He continued to do so when he got to ARPA and that project resulted in the invention of the computer mouse and the Mother of All Demos, one of the most inspirational moments and a turning point in the history of computing. 

They also funded a project to develop an operating system called Multics. This would be a two million dollar project run by General Electric, MIT, and Bell Labs. Run through Project MAC at MIT, there were just too many cooks in the kitchen. Later, some of those Bell Labs cats would just do their own thing. Ken Thompson had worked on Multics and took the best and worst of it into account when he wrote the first lines of Unix and the B programming language, which led to one of the most important languages of all time, C.

Interactive graphical computing and operating systems were great, but IPTO, and so Bob Taylor and team, would fund, straight out of the Pentagon, the ability for one computer to process information on another computer. Which is to say, they wanted to network computers. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they’d awarded an RFQ to build a network to a company called Bolt Beranek and Newman (BBN), who would build Interface Message Processors, or IMPs. The IMPs would connect a number of sites and route traffic, and the first one went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET, the commonly accepted precursor to the Internet.

There was another networking project going on at the time that was also getting funding from ARPA as well as the Air Force: PLATO, out of the University of Illinois. PLATO was meant for teaching and had begun in 1960, but by then they were on version IV, running on a CDC Cyber, and the time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over phone lines, making the system another early network.

Then things get weird. Taylor was sent to Vietnam as a civilian, although his rank equivalent would be a brigadier general. He helped develop the Military Assistance Command in Vietnam. Battlefield operations and reporting were entering the computing era. Only problem is, while Taylor was a war veteran and had been deep in the defense research industry for his entire career, Vietnam was an incredibly unpopular war, and seeing it first hand and getting pulled into the theater of war had him ready to leave. This was compounded by interpersonal problems with Larry Roberts, who was running the ARPANET project by then and chafed at Taylor being his boss without a PhD or direct research experience. And so Taylor joined a project ARPA had funded at the University of Utah and left ARPA.

There, he worked with Ivan Sutherland, who wrote Sketchpad and is known as the Father of Computer Graphics, until he got another offer. This time, from Xerox to go to their new Palo Alto Research Center, or PARC. One rising star in the computer research world was pretty against the idea of a centralized mainframe driven time sharing system. This was Alan Kay. In many ways, Kay was like Lick. And unlike the time sharing projects of the day, the Licklider and Kay inspiration was for dedicated cycles on processors. This meant personal computers. 

The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Taylor was able to pick the best of the scientists he’d helped fund at ARPA. He helped bring in people from the Stanford Research Institute, where they had been working on the oN-Line System, or NLS.

This new Computer Science Laboratory landed people like Charles Thacker, David Boggs, Butler Lampson, and Bob Sproull, and would develop the Xerox Alto, the inspiration for the Macintosh. The Alto contributed the very ideas of overlapping windows, icons, menus, cut and paste, and word processing. In fact, Charles Simonyi from PARC would work on Bravo before moving to Microsoft to spearhead Microsoft Word.

Bob Metcalfe on that team was instrumental in developing Ethernet so workstations could communicate with ARPANET all over the growing campus-connected environments. Metcalfe would leave to form 3COM. 

SuperPaint would be developed there and Alvy Ray Smith would go on to co-found Pixar, continuing the work begun by Richard Shoup. 

They developed the laser printer, some of the ideas that ended up in TCP/IP, and their research into page layout languages would end up with Chuck Geschke, John Warnock, and others founding Adobe.

Kay would bring us the philosophy behind the DynaBook which decades later would effectively become the iPad. He would also develop Smalltalk with Dan Ingalls and Adele Goldberg, ushering in the era of object oriented programming. 

They would do pioneering work on VLSI semiconductors, ubiquitous computing, and anything else to prepare the world to mass produce the technologies that ARPA had been spearheading for all those years. Xerox famously did not mass produce those technologies. And nor could they have cornered the market on all of them. The coming waves were far too big for one company alone. 

And so it was that PARC, unable to bring the future to the masses fast enough to impact earnings per share, got a new director in 1983, and William Spencer was yet another boss that Taylor clashed with. Some resented that he didn’t have a PhD in a world where everyone else did. Others resented the close relationship he maintained with his teams. Either way, Taylor left PARC in 1983 and many of the scientists left with him.

It’s both a curse and a blessing to learn more and more about our heroes. Taylor was one of the finest minds in the history of computing. His tenure at PARC certainly saw a lot of innovation and one of the most innovative teams ever assembled. But as many of us who have been put into a position of leadership know, it’s easy to get caught up in the politics. I am ashamed every time I look back and see examples of building political capital at the expense of a project, or letting an interpersonal problem get in the way of the greater good for a team. But also, we’re all human, and the people that I’ve interviewed seem to match the accounts I’ve read in other books.

And so Taylor’s final stop was Digital Equipment Corporation, where he was hired to form their Systems Research Center in Palo Alto. They brought us the AltaVista search engine, the Firefly computer, Modula-3, and a few other advances. Taylor retired in 1996. DEC was acquired by Compaq in 1998, and when Compaq was acquired by HP, the SRC was merged with other labs at HP.

From ARPA to Xerox to Digital, Bob Taylor certainly left his mark on computing. He had a knack for seeing the forest for the trees and inspired engineering feats the world is still wrestling with how to bring to fruition. Raw, pure science. He died in 2017. He worked with some of the most brilliant people in the world at ARPA. He inspired passion, and sometimes drama, in what Stanford’s Donald Knuth called “the greatest by far team of computer scientists assembled in one organization.”

In his final email to his friends and former coworkers, he said “You did what they said could not be done, you created things that they could not see or imagine.” The Internet, the Personal Computer, the tech that would go on to become Microsoft Office, object oriented programming, laser printers, tablets, ubiquitous computing devices. So, he isn’t exactly understating what they accomplished in a false sense of humility. I guess you can’t do that often if you’re going to inspire the way he did. 

So feel free to abandon the pretense as well, and go inspire some innovation. Heck, who knows where the next wave will come from. But if we aren’t working on it, it certainly won’t come.

Thank you so much and have a lovely, lovely day. We are so lucky to have you join us on yet another episode. 



We’ve covered Xerox PARC a few times - and one aspect that’s come up has been the development of the Bravo word processor from Butler Lampson, Charles Simonyi, and team. Simonyi went on to work at Microsoft and spearheaded the development of Microsoft Word. But Bravo was the first WYSIWYG tool for creating documents, which we now refer to as a word processor. That was 1974. 

Something else we’ve covered happened in 1974: the release of the Altair 8800. One aspect of the Altair we didn’t cover is that Michael Shrayer was a tinkerer who bought an Altair and wrote a program that allowed him to write manuals. This became the Electric Pencil. It was text based though, not a WYSIWYG like Bravo. It ran in 8k of memory and would be ported to the Intel 8080, Zilog Z-80, and other processors over the years leading into the 80s. But let’s step back to the 70s for a bit. Because bell bottoms.

The Altair inspired a clone called the IMSAI 8080 in 1975. The director of marketing, Seymour Rubinstein, started tinkering with the idea of a word processor. He left IMSAI and by 1978 had put together $8,500 and started a company called MicroPro International. He convinced Rob Barnaby, the head programmer at IMSAI, to join him.

They did market research into the tools being used by IBM and Xerox. They made a list of what was needed and got to work. The word processor grew. They released their word processor, which they called WordStar, for CP/M running on the Intel 8080. By then it was 1979 and CP/M was a couple years old but already a pretty dominant operating system for microcomputers. Software was a bit more expensive at the time and WordStar sold for $495.

At the time, you had to port your software to each OS running on each hardware build. And the code was in assembly, so not the easiest thing in the world. This meant they wanted to keep the feature set slim so WordStar could run on as many platforms as possible. They ran on the Osborne 1 portable, and with CP/M support they became the standard. They could wrap words automatically to the next line. Imagine that.
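That automatic wrapping can be sketched with a simple greedy algorithm: fit as many words as possible on each line without exceeding the line width. This is an illustrative sketch, not WordStar's actual code, which was hand-written assembly.

```python
# Greedy word wrap: pack words onto a line until the next word won't fit.
def word_wrap(text, width):
    lines, current = [], ""
    for word in text.split():
        if not current:
            current = word
        elif len(current) + 1 + len(word) <= width:
            current += " " + word  # word fits on the current line
        else:
            lines.append(current)  # start a new line
            current = word
    if current:
        lines.append(current)
    return lines

print(word_wrap("the quick brown fox jumps over the lazy dog", 15))
# ['the quick brown', 'fox jumps over', 'the lazy dog']
```

Trivial on today's hardware, but in 8-bit assembly with a few kilobytes of RAM, doing this live as the user typed was a selling point.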

They ported the software to other platforms. It was clear there was a new OS that they needed to run on. So they brought in Jim Fox, who ported WordStar to run on DOS in 1981. They were on top of the world. Sure, there was Apple Writer, Word, WordPerfect, and Samna, but WordStar was it.

Arthur C Clarke met Rubinstein and Barnaby and said they "made me a born-again writer, having announced my retirement in 1978, I now have six books in the works, all through WordStar." He would actually write dozens more works.

They released the third version in 1982 and quickly grew into the most popular, dominant word processor on the market. The code base was getting a little stale and so they brought in Peter Mierau to overhaul it for WordStar 4. The refactor didn’t come at the best of times. In software, you’re the market leader until… You thought I was going to say Microsoft moved into town? Nope, although Word would eventually dominate word processing. But there was one more step before computing got there. 

Next, along with the release of the IBM PC, WordPerfect took the market by storm. They had more features, and while WordStar was popular, it was the most pirated piece of software at the time. This meant less money to build features, like making better use of the IBM PC keyboard to provide more productivity tools. This isn’t to say they weren’t making money. They’d grown to $72M in revenue by 1984. When they filed for their initial public offering, or IPO, they had a huge share of the word processing market and accounted for one out of every ten dollars spent on software.

WordStar 5 came in 1989, and as we moved into the 90s it was clear that WordStar 2000 had gone nowhere, so WordStar 6 shipped in 1990 and 7 in 1991. The buying tornado had slowed, and while revenues were great, copy-protected disks were slowing the spread of the software.

Rubinstein is commonly credited with creating the first end-user software licensing agreement, common with nearly every piece of proprietary software today. Everyone was pirating back then so if you couldn’t use WordStar, move on to something you could steal. You know, like WordPerfect. MultiMate, AmiPro, Word, and so many other tools. Sales were falling. New features weren’t shipping. 

One pretty big missing feature was support for Windows. By the time Windows support shipped, Microsoft had released Word, which had had a solid two years to become the new de facto standard. SoftKey would acquire the company in 1994 and go on to acquire a number of other companies until 2002, when they were themselves acquired. But by then WordStar was so far forgotten that no one was sure who actually owned the WordStar brand.

I can still remember using WordStar. And I remember, when I was a consultant, doing work for a couple of authors to help them recover documents, which were pure ASCII files originally, or came from computers that had WordStar files that moved to the WSD extension later. And I can remember actually restoring a BAK file while working at the computer labs at the University of Georgia, common in the DOS days. It was a joy to use until I realized there was something better.

Rubinstein went on to build another piece of software, a spreadsheet. He worked with another team, got a little help from Barnaby and Fox, and eventually called it Surpass, which was acquired by Borland, who would rename it to Quattro Pro. That spreadsheet borrowed the concept of multiple sheets in tabs from Boeing Calc, now a standard metaphor. Amidst lawsuits with Lotus on whether you could patent how software functions, or the UX of software, Borland sold Quattro Pro to Novell during a time when Novell was building a suite of products to compete with Microsoft.

We can thank WordStar for so much. Inspiring content creators and creative new features for word processing. But we also have to remember that early successes are always going to inspire additional competition. Any company that grows large enough to file an initial public offering is going to face barbarian software vendors at their gates. When those vendors have no technical debt, they can out-deliver features. But as many a software company has learned, expanding to additional products by becoming a portfolio company is one buffer for this. As is excellent execution. 

The market was WordStar’s to lose. And there’s a chance that it was lost the second Microsoft pulled in Charles Simonyi, one of the original visionaries behind Bravo from Xerox PARC. But when you have 10% of all PC software sales it seems like maybe you got outmaneuvered in the market. But ultimately the industry was so small and so rapidly changing in the early 1980s that it was ripe for disruption on an almost annual basis. That is, until Microsoft slowly took the operating system and productivity suite markets and .doc, .xls, and .ppt files became the format all other programs needed to support. 

And we can thank Rubinstein and team for pioneering what we now call the software industry. He started on an IBM 1620 and ended his career with WebSleuth, helping to usher in the search engine era. Many of the practices he put in place to promote WordStar are now common in the industry. These days I talk to a dozen serial entrepreneurs a week. They could all wish to some day be as influential as he was.

The Immutable Laws of Game Mechanics In A Microtransaction-Based Economy


Once upon a time, we put a quarter in a machine and played a game for a while. And life was good. The rise of personal computers and the subsequent fall in the cost of microchips allowed some of the same chips found in early computers, such as the Zilog Z80, to bring video game consoles into homes across the world. That one chip could be found in the ColecoVision, the Nintendo Game Boy, and the Sega Genesis. Given that many of the cheaper early computers came with joysticks and games at the time, the line between personal computer and video game console seemed natural.

Then came the iPhone, which brought an explosion of apps. Apps were anywhere from a buck to a hundred. We weren't the least surprised by the number of games that exploded onto the platform. Nor by the creativity of the developers. When the Apple App Store and Google Play added in-app purchasing and later in-app subscriptions it all just seemed natural. But it has profoundly changed the way games are purchased, distributed, and the entire business model of apps. 

The Evolving Business Model of Gaming

Video games were originally played in arcades, similar to pinball. The business model was that each game cost a quarter or a token. With the advent of PCs and video game consoles, games were bought in stores, as were records or cassettes of music. The business model was that the store made money (40-50%), the distributor who got the game into a box and onto the shelf made money, and the company that made the game got some as well. And discounts to sell more inventory usually came out of someone not called the retailer. By the time everyone involved got a piece, it was common for the maker of the game to get between $5 and $10 per unit sold on a $50 game.

No one was surprised that there was a whole cottage industry of software piracy. Especially given that most games could be defeated in 40 to 100 hours. This of course spawned a whole industry to thwart piracy, eating into margins but theoretically generating more revenue per game created. 

Industries evolve. Console and computer gaming split (although arguably consoles have always just been computers) and the gamer-verse further schism'd between those who played various types of games. Some games were able to move to subscription models and some companies sprang up to deliver games through subscriptions or as rentals  (game rentals over a modem was the business model that originally inspired the AOL founders). And that was ok for the gaming industry, which slowly grew to the point that gaming was a larger industry than the film industry.

Enter Mobile Devices and App Stores

Then came mobile devices, disrupting the entire gaming industry. Apple began the App Store model, establishing that the developer got 70% of the sale - much better than 5%. Steve Jobs had predicted the coming App Store in a 1985 interview, and when the iPhone was released he tried to keep the platform closed, but eventually capitulated and opened up the App Store to developers.

Those first developers made millions. Some developers were able to port games to mobile platforms and try to maintain a similar pricing model to the computer or console versions. But the number of games created a downward pressure that kept games cheap, and often free. 

The number of games in the App Store grew (today there are over 5 million apps between Apple and Google). With a constant downward pressure on price, the profits dropped. Suddenly, game developers forgot they often used to get only 10 percent of the sale of a game, and started to blame the companies that owned the stores the games were distributed in: Apple, Google, and in some cases, Steam.

The rise and subsequent decrease in popularity of Pokémon Go was the original inspiration for this article in 2016, but since then a number of games have validated the perspectives. These free games provide a valuable case study into how the way we design a game to be played (known as game mechanics) impacts our ability to monetize the game in various ways. And there are lots and lots of bad examples in games (and probably legislation on the way to remedy abuses) that also tell us what not to do.

The Microtransaction-Based Economy

These days, game developers get us hooked on the game early, get us comfortable with the pace of the game, and give us an early acceleration. But then that slows down. Many a developer then points us to in-app purchases in order to unlock items that allow us to maintain the pace of a game, or even to hasten the pace. And given that we're playing against other people a lot of the time, they try to harness our natural competitiveness to get us to buy things. These in-app purchases are known as microtransactions. And the aggregate of these in-app purchases can be considered a microtransaction-based economy.

As the microtransaction-based economy has arrived in full force, certain standards are emerging as cultural norms for these economies. Violating these norms causes vendors to get blasted on message boards and, more importantly, to lose rabid fans of the game. As such, I’ve decided to codify my own set of laws for these, which are as follows:

All items that can be purchased with real money should be available for free. 

For example, when designing a game that has users building a city and we develop a monument that users can pay $1 for and place in their city to improve morale of those that live in the city, that monument should be able to be earned in the game as well. Otherwise, you’re able to pay for an in-app purchase that gives some players an advantage for doing nothing more than spending money. 

In-app purchases do not replace game play, but hasten the progression through the game. 

For example, when designing a game that has users level up based on earning experience points for each task they complete, we never want to just gift experience points based on an in-app purchase. Instead, in-app purchases should provide a time-bound amplification to experience (such as doubling experience for 30 minutes in Pokémon Go or keeping anyone else from attacking a player for 24 hours in Clash of Clans so we can save enough money to buy that one Town Hall upgrade we just can’t live without). 
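As a sketch of how such a time-bound amplifier might work (the class, constant names, and numbers here are my own invention, not any real game's code):

```python
import time

# A hypothetical sketch: an in-app purchase grants a time-bound XP
# multiplier instead of gifting raw experience points. Player and
# BOOST_DURATION are illustrative assumptions.
BOOST_DURATION = 30 * 60  # 30 minutes, as with Pokémon Go's doubling items

class Player:
    def __init__(self):
        self.xp = 0
        self.boost_expires_at = 0.0

    def buy_xp_boost(self, now=None):
        """The purchase only starts (or extends) a doubling window."""
        now = time.time() if now is None else now
        self.boost_expires_at = max(self.boost_expires_at, now) + BOOST_DURATION

    def earn_xp(self, amount, now=None):
        """XP is still earned through play; the boost merely amplifies it."""
        now = time.time() if now is None else now
        multiplier = 2 if now < self.boost_expires_at else 1
        self.xp += amount * multiplier
```

The point of the design is that `earn_xp` is the only way to gain experience; the purchase never writes to `xp` directly.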

The amount paid for items in a game should correlate to the amount of time saved in game play. 

For example, suppose we get stuck on a level in Angry Birds. We could pay a dollar for a pack of goodies that will get us past that level (and probably 3 more), so we can move on. Or we could keep hammering away at that level for another hour. Thus, we saved an hour, but lost pride points in the fact that we didn’t conquer that level. Later in the game, we can go back and get three stars without paying to get past it. 

Do not allow real-world trading. 

This is key. If it’s possible to build an economy outside the game, players can break your game mechanics. For example, in World of Warcraft, you can buy gold and magic items online for real money and then log into the game only to have another shady character add those items to your inventory. This leads to people writing programs known as bots (short for robots) to mine gold or find magic items on their behalf so they can sell them in the real world. There are a lot of negative effects to such behavior: the need to constantly monitor for bots (which wastes a lot of developer cycles), in-game economies that practically crash when a game update (e.g. a new map) breaks the bots, and games that become both more confusing for users and less controllable by the developer.

Establish an in-game currency.

You don’t want users of the game buying things with cash directly. Instead, you want them to buy a currency, such as gold, rubies, gems, karma, or whatever you’d like to call that currency. Disassociating purchases from real-world money causes users to lose track of what they’re buying and spend more money. Seems shady, and it very well may be, but I don’t write games so I can’t say if that’s the intent or not. It’s a similar philosophy to buying poker chips rather than using money in a casino (just without the free booze).
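A minimal sketch of that disassociation might look like this (the pack sizes and gem prices are invented for illustration, not real store figures):

```python
# Hypothetical sketch: real money buys only an intermediate currency
# ("gems"), and everything in the store is priced in gems.
GEM_PACKS = {4.99: 500, 9.99: 1100, 49.99: 6500}  # dollars -> gems

class Wallet:
    def __init__(self):
        self.gems = 0

    def buy_pack(self, price_usd):
        """The only place real money enters the system."""
        self.gems += GEM_PACKS[price_usd]

    def spend(self, cost_in_gems):
        """In-game purchases are denominated in gems, never dollars."""
        if cost_in_gems > self.gems:
            raise ValueError("not enough gems")
        self.gems -= cost_in_gems
```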

Provide multiple goals within the game.

Players will invariably get bored with the critical path in your game. When they do, it’s great for players to find other aspects of the game to keep them engaged. For example, in Pokémon Go, you might spend 2 weeks trying to move from level 33 to level 34. During that time, you might as well go find that last Charmander so you can evolve a Charizard. That’s two different goals: one to locate a creature, the other to gain experience. Or you can go take over some gyms in your neighborhood. Or you can power level by catching hundreds of Pidgeys. The point is, to keep players engaged during long periods with no progression, having choose-your-own-adventure style game play is important. For massively multiplayer games (especially role-playing games) this is critical, as players will quickly tire of mining for gold and want to go, for example, jump into the latest mass land war. To place a little context around this, there are also 28 medals in Pokémon Go (that I’m aware of), which keep providing more and more goals in the game. 

Allow for rapid progression early in the game in order to hook users, so they will pay for items later in the game.

We want people to play our games because they love them. Less than 3% of players will make an in-app purchase in a given game. But that number skyrockets as time is invested in a game. Quickly progressing through levels early in a game keeps users playing. Once users have played a game for 8 or 9 hours, if you tell them that for a dollar it will seem like they kept playing for another 8 or 9 hours, based on the cool stuff they’ll earn, they’re likely to give up that dollar and keep playing for another couple of hours rather than get that much needed sleep! We should never penalize players that don't pay up. In fact, players often buy things that simply change the look of their character in games like Among Us. There is no need to impact game mechanics with a purchase if we build an awesome enough game.  

Create achievable goals in discrete amounts of time. 

Boom Beach villages range from level 1 to level 64. As players rise through the levels, reaching the next stage becomes progressively more difficult, given that other players are paying to play. Goals against computer players (or NPCs or AI, according to how we want to think of them) are similar. All should be achievable though. The game Runeblade for the Apple Watch was based on fundamentally sound game mechanics that could enthrall a player for months; however, there was no way to get past a certain point. Therefore, players lost interest, Eric Cartman-style, and went home.

Restrict the ability to automate the game.

If we had the choice to run every day to lose weight, or to eat donuts, watch other people run, and still lose weight, which would most people choose? Duh. The problem is that when players automate your game, they end up losing interest as their time investment in the game diminishes, as does the skill needed to shoot up through the levels. Evony Online was such a game; I’m pretty sure I still get an email every month chastising me for botting the game 8-10 years after anyone remembers that the game existed. And when World of Warcraft became too dependent on resources obtained by gold-mining bots, the in-game economy could practically crash when those bots were knocked offline. Having said this, such drama adds to the intrigue - which can be a game inside a game for many. 

Pit players against one another.

Leaderboards. Everyone wants to be in 1st place, all the time. Or to see themselves moving up in the rankings. By providing a ranking system, we increase engagement and drive people towards making in-app purchases. Those purchases just shouldn't directly buy a leg up, though. It's a slippery slope to allow a player to jump 30 people in front of them to get to #1,000 in the rankings, only to see those people make in-app purchases of their own and develop an addiction to in-app purchases just to maintain their position in the rankings. It's better to make smaller amounts and keep players around than to have them hate a developer once they've realized the game was making money off addiction. Sounds a bit like gambling.

Don’t pit weak players against strong players unnecessarily. 

In Clash of Clans a player builds a village. As they build more cool stuff in the village, the village levels up. The player can buy rubies to complete buildings faster, so you can basically buy village levels. But since a player can basically buy levels, the levels can exceed the player’s skill. Therefore, in order to pit matched players in battles, a second metric was introduced that matches battles based on the won/lost ratios of previous battles. By ensuring that players of similar skill duel one another, the skill of players is more likely to progress organically and therefore they remain engaged with the game. The one exception to this rule that I’ve seen actually work well so far has been in Pokémon Go, where a player needs to be physically close to a gym rather than just sitting in their living room playing on a console. That geographical alignment really changes this dynamic, as does the great way that gym matches heavily favor attackers, driving fast turnover in gyms and keeping the game accessible to lower level players.
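That second metric can be sketched as a simple matchmaking function (the trophy numbers and dictionary shape are my own assumptions, not Clash of Clans internals):

```python
# Hypothetical sketch: match on a battle-earned metric ("trophies",
# which track wins and losses) rather than on a village level that
# can simply be bought.
def find_opponent(player, candidates, max_gap=200):
    """Return the candidate closest in trophies, or None if everyone
    is more than max_gap trophies away."""
    best = min(candidates,
               key=lambda c: abs(c["trophies"] - player["trophies"]),
               default=None)
    if best is None or abs(best["trophies"] - player["trophies"]) > max_gap:
        return None
    return best
```

Returning `None` rather than stretching the gap is the design choice: a weak player waits a little longer for a match instead of being fed to someone who bought their way up.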

Add time-based incentives. 

If a player logs into a game every day, they should get a special incentive for the day that amplifies the more days they log in in a row. Or if they don’t log in, another player can steal all the stuff. Players get a push alert when another player attacks them. There are a number of different ways to incentivize players to keep logging into an app. The more we keep players in an app, the more likely they are to make a purchase. Until they get so many alerts that they delete your app. Don’t do that.
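A daily streak like that reduces to a few lines (the day numbering and base reward are illustrative assumptions):

```python
# Hypothetical sketch: a login reward that amplifies with consecutive
# days and resets when a day is missed. Days are plain integers
# (e.g. days since epoch); the base reward of 10 is made up.
def login_reward(last_login_day, today, streak, base=10):
    """Return (new_streak, reward) for a login on `today`."""
    if today == last_login_day:
        return streak, 0            # already claimed today
    if today == last_login_day + 1:
        streak += 1                 # consecutive day: streak grows
    else:
        streak = 1                  # missed a day: start over
    return streak, base * streak
```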

Incentivize pure gameplay. 

It might seem counter-intuitive to incentivize players to not use in-app purchases. But not allowing for a perfect score alongside an in-app purchase (e.g. not allowing for a perfect level in Angry Birds if you used an in-app purchase) will drive more engagement in a game, while likely still allowing for an in-app purchase and then a late-game strategy of finding perfection to unlock that hidden extra level, or whatever the secret sauce is for your game.

Apply maximum purchasing amounts.

Games can get addictive for players. We want dolphins, not whales. This is to say that we want people to spend what they would have spent on a boxed game, say $50, or even that per month. But when players get into spending thousands per day, they're likely to at some point realize their error in judgement and contact Apple or Google for a refund. And they should get one. Don't take advantage of people. 
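Enforcing such a cap is trivial compared to the goodwill it preserves; here's a sketch (the dollar limits are made up, not an industry standard):

```python
# Hypothetical sketch: refuse any transaction that would push a player
# past a daily or monthly spending cap. The limits are illustrative.
DAILY_CAP = 50.00
MONTHLY_CAP = 200.00

def can_purchase(amount, spent_today, spent_this_month):
    """True only if the purchase stays within both caps."""
    return (spent_today + amount <= DAILY_CAP
            and spent_this_month + amount <= MONTHLY_CAP)
```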

Make random returns on microtransactions transparent.

There has been talk of regulating randomized loot boxes. Why? Because the numbers don't add up. Given the rampant abuse of in-app purchases for random gear, developers who publish the algorithm or source code for how those rewards are derived will have a certain level of non-repudiation when the lawsuits start. Again, if those rewards can also be earned during the game (maybe at a lower likelihood), then we're not abusing game mechanics. 
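Transparency here can be as simple as publishing the drop table and deriving every draw from it; a sketch (items and odds invented for illustration):

```python
import random

# Hypothetical sketch: the drop table is published, and the draw is a
# pure function of it, so the odds are auditable.
DROP_TABLE = [("common sword", 0.70), ("rare shield", 0.25), ("epic mount", 0.05)]

def open_loot_box(rng=random):
    """Draw one item using exactly the published probabilities."""
    roll, cumulative = rng.random(), 0.0
    for item, probability in DROP_TABLE:
        cumulative += probability
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]  # guard against floating-point drift
```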


The above list might seem manipulative at times. Especially to those who don't write code for a living. And to some degree it is. But it can be done ethically and when it is the long-term returns are greater. If nothing else, these laws are a code of ethics of sorts. 

These are lessons that hundreds of companies are out there learning by trial and error, and hopefully documenting them can help emergent companies not have to repeat some of the same mistakes of others. 

We could probably get up to 100 of these (with examples) if we wanted to! What laws have you noticed?



Months before the first node of ARPANET went online, the intrepid early engineers were just starting to discuss the technical underpinnings of what would one day evolve into the Internet. Here, we hear how hosts would communicate with the IMPs, or early routing devices (although maybe more like a Paleolithic version of what's in a standard network interface today).

It's nerdy. There's discussion of packets and what bits might do what and later Vint Cerf and Bob Kahn would redo most of this early work as the protocols evolved towards TCP/IP. But reading their technical notes and being able to trace those through thousands of RFCs that show the evolution into the Internet we know today is an amazing look into the history of computing. 

The Spread of Science And Culture From The Stone Age to the Bronze Age


Humanity realized we could do more with stone tools some two and a half million years ago. We made stone hammers and cutting implements by flaking stone, sharpening deer bone, and sticks, sometimes sharpened into spears. It took 750,000 years, but we figured out we could attach those to sticks to make hand axes and other cutting tools about 1.75 million years ago. Humanity had discovered the first of six simple machines, the wedge. 

During this period we also learned to harness fire. Because fire frightened off animals that liked to cart humans off in the night, the population increased, we began to cook food, and the mortality rate decreased. 

More humans. We learned to build rafts and began to cross larger bodies of water. We spread. Out of Africa, into the Levant, up into modern Germany, France, into Asia, Spain, and up to the British Isles by 700,000 years ago. And these humanoid ancestors traded. Food, shell beads, bone tools, even arrows. 

By 380,000-250,000 years ago we got the first anatomically modern humans. The oldest of those remains has been found in modern-day Morocco in Northern Africa. We also have evidence of that spread from the African Rift to Turkey in Western Asia, to the Horn of Africa in Ethiopia and Eritrea, across the Red Sea and then down into Israel, South Africa, the Sudan, the UAE, Oman, into China, Indonesia, and the Philippines. 

200,000 years ago we had cored stone on spears and awls, and in the late Stone Age saw the emergence of craftsmanship and cultural identity. This might be cave paintings or art made of stone. We got clothing around 170,000 years ago, when the area of the Sahara Desert was still fertile ground, and as people migrated out of there we got the first structures of sandstone blocks at the border of Egypt and modern Sudan. As societies grew, we started to decorate, first with seashell beads around 80,000 years ago, with the final wave of humans leaving Africa just in time for the Toba Volcano supereruption to devastate human populations 75,000 years ago. 

And still we persisted, with cave art arriving 70,000 years ago. And our populations grew. 

Around 50,000 years ago we got the first carved art and the first baby boom. We began to bury our dead and so got the first religions. In the millennia that followed we settled in Australia, Europe, Japan, Siberia, the Arctic Circle, and even into the Americas. This time period was known as the Great Leap Forward and we got microliths, or small geometric blades shaped into different forms. This is when the oldest settlements have been found from Egypt, the Italian peninsula, up to Germany, Great Britain, out to Romania, Russia, Tibet, and France. We got needles and deep sea fishing. Tuna sashimi anyone?

By 40,000 years ago the Neanderthals went extinct and modern humans were left to forge our destiny in the world. The first aboriginal Australians settled the areas we now call Sydney and Melbourne. We started to domesticate dogs and create more intricate figurines, often of a Venus. We made ivory beads, and even flutes of bone. We slowly spread. Nomadic peoples, looking for good hunting and gathering spots. In the Pavlov Hills in the modern Czech Republic they started weaving and firing figurines from clay. We began to cremate our dead. Cultures like the Kebaran spread, to just south of Haifa. But as those tribes grew, there was strength in numbers. 

The Bhimbetka rock shelters began in the heart of modern-day India, with nearly 800 shelters spread across 8 square miles, from 30,000 years ago to well into the Bronze Age. Here, we see elephants, deer, hunters, arrows, battles with swords, and even horses. A snapshot into the lives of generation after generation. Other cave systems have been found throughout the world, including Belum in India but also in Germany, France, and most other areas humans settled. As we found good places to settle, we learned that we could do more than forage and hunt for our food. 

Our needs became more complex. Over those next ten thousand years we built ovens and began using fibers, twisting some into rope, making clothing out of others, and fishing with nets. We got our first semi-permanent settlements, such as Dolní Věstonice in the modern-day Czech Republic, where they had a kiln that could be used to fire clay, such as the Venus statue found there - and a wolf bone possibly used as a counting stick. The people there had woven cloth, a boundary made of mammoth bones, useful to keep animals out - and a communal bonfire in the center of the village.

A similar settlement in modern Siberia shows a 24,000 year old village. Except the homes were a bit more subterranean. 

Most parts of the world began to cultivate agriculture between 20,000 and 15,000 years ago, depending on location. During this period we solved the age-old problem of food supplies, which introduced new needs. And so we saw the beginnings of pottery and textiles. Many of the cultures of the next 15,000 years are now often referred to based on the types of pottery they would make.

These cultures settled close to the water, surrounding seas or rivers. And we built large burial mounds. Tools from this time have been found throughout Europe, Asia, Africa, and in modern Mumbai in India. Some cultures were starting to become sedentary, such as the Natufian culture, which collected grains, started making bread, and cultivated cereals like rye. We got more complex socioeconomics, and these villages were growing to support upwards of 150 people. 

The Paleolithic time of living in caves and huts, which began some two and a half million years ago, was ending. By 10,000 BCE, Stone Age technology evolved to include axes, chisels, and gouges. This is a time many parts of the world entered the Mesolithic period. The earth was warming and people were building settlements. Some were used between cycles of hunting. As the plants we left in those settlements grew more plentiful, people started to stay there more, some becoming permanent inhabitants. Settlements like Nanzhuangtou, China, where we saw dogs, stones used for grinding, and the cultivation of seed grasses. 

The Mesolithic period is when we saw a lot of cave paintings and engraving. And we started to see a division of labor. A greater amount of resources led to further innovation. Some inventions would have been made multiple times, in multiple places, again and again until we got them right. One of those was agriculture. 

The practice of domesticating barley, grains, and wheat began in the millennia leading up to 10,000 BCE and spread up from Northeast Africa and into Western Asia and throughout. There was enough of a surplus that we got the first granary by 9500 BCE. This is roughly the time we saw the first calendar circles emerge. Tracking time would be done first with rocks used to form early megalithic structures. 

Domestication then spread to animals, with sheep coming in around the same time, then cattle, all of which could be done in a pastoral or somewhat nomadic lifestyle. Humans then began to domesticate goats and pigs by 8000 BCE, in the Middle East and China. Something else started to appear in the eighth millennium BCE: a copper pendant was found in Iraq.

Which brings us to the Neolithic Age. And people were settling along the Indus River, forming larger complexes such as Mehrgarh, also from 7000 BCE. The first known dentistry dates back to this time, showing drilled molars. People in the Timna Valley, located in modern Israel, also started to mine copper. This gave us the second real crafting specialty after pottery. Metallurgy was born. 

Those specialists sought to improve their works. Potters started using wheels, although we wouldn’t think to use them vertically to pull a cart until somewhere between 6000 BCE and 4000 BCE. Again, there are six simple machines. The next is the wheel and axle. 

Humans were nomadic, or mostly nomadic, up until this point, but settlements and those who lived in them were growing. We started to settle in places like Lake Nasser and along the river banks from there, up the Nile to modern-day Egypt. Nomadic people settled into areas along the eastern coast of the Mediterranean and between the Tigris and Euphrates Rivers, with Maghzaliyah being another village supporting 150 people. They began building using packed earth, or clay, for walls and stone for foundations. This is where one of the earliest copper axes has been found. And from those early beginnings, copper, and so metallurgy, spread for nearly 5,000 years. 

Cultures like the Yangshao culture in modern China first began with slash and burn cultivation, or plant a crop until the soil stops producing and move on. They built rammed earth homes with thatched, or wattle, roofs. They were the first to show dragons in artwork. In short, with our bellies full, we could turn our attention to the crafts and increasing our standard of living. And those discoveries were passed from complex to complex in trade, and then in trade networks. 

Still, people gotta’ eat. Those who hadn’t settled would raid these small villages, if only out of hunger. And so the cultural complexes grew so neolithic people could protect one another. Strength in numbers. Like a force multiplier. 

By 6000 BCE we got predynastic cultures flourishing in Egypt. With the final remnants of the ice age retreating, raiders moved in on the young civilization complexes from the spreading desert in search of food. The area from the Nile Valley in northern Egypt, up the coast of the Mediterranean and into the Tigris and Euphrates, is now known as the Fertile Crescent - and given the agriculture and then pottery found there, as the cradle of civilization. Here, we got farming. We weren’t haphazardly putting crops we liked in the ground; we started to irrigate and learn to cultivate. 

Information about when to plant various crops was handed down through the generations. Time was kept by the season and the movement of the stars. People began settling into larger groups in various parts of the world. Small settlements at first. Rice was cultivated in China, along the Yangtze River. This led to the rise of the Beifudi and Peiligang cultures, with the first site at Jiahu hosting over 45 homes and between 250 and 800 people. Here, we see raised altars, carved pottery, and even ceramics. 

We also saw the rise of the Houli culture in Neolithic China. Similar to other sites from the time, we see hunting, fishing, early rice and millet production and semi-subterranean housing. But we also see cooked rice, jade artifacts, and enough similarities to show technology transfer between Chinese settlements and so trade. Around 5300 BCE we saw them followed by the Beixin culture, netting fish, harvesting hemp seeds, building burial sites away from settlements, burying the dead with tools and weapons. The foods included fruits, chicken and eggs,  and lives began getting longer with more nutritious diets.

Cultures were mingling. Trading. Horses started to be tamed around 5000 BCE, spreading from Kazakhstan. The third simple machine, the lever, came into use around the same time, although it wouldn’t truly be understood until Archimedes. 

Polished stone axes emerged in Denmark and England. Suddenly people could clear out larger and larger amounts of forest, and settlements could grow. Larger settlements meant more to hunt, gather, or farm for food - and more specialists to foster innovation. In today’s southern Iraq this led to the growth of a city called Eridu. 

Eridu was the city of the first Sumerian kings. The bay on the Persian Gulf allowed trading and being situated at the mouth of the Euphrates it was at the heart of the cradle of civilization. The original neolithic Sumerians had been tribal fishers and told stories of kings from before the floods, tens of thousands of years before the era. They were joined by the Samarra culture, which dates back to 5,700 BCE, to the north who brought knowledge of irrigation and nomadic herders coming up from lands we would think of today as the Middle East. The intermixing of skills and strengths allowed the earliest villages to be settled in 5,300 BCE and grow into an urban center we would consider a city today. 

This was the beginning of the Sumerian Empire. Going back to 5300 BCE, houses had been made of mud bricks and reed. But they would build temples, ziggurats, and grow to cover over 25 acres with over 4,000 people. As the people moved north and gradually merged with other cultural complexes, the civilization grew. 

Uruk grew to over 50,000 people and is the etymological source of the name Iraq. And the population of all those cities and the surrounding areas that became Sumer is said to have grown to over a million people. They carved anthropomorphic furniture. They made jewelry of gold and created crude copper plates. They made music with flutes and stringed instruments, like the lyre. They used saws and drills. They went to war with arrows and spears and daggers. They used tablets for writing, using a system we now call cuneiform. Perhaps they wrote to indicate lunar months, as they were the first known people to use twelve 29- or 30-day months. They could sign writings with seals, which they are also credited with inventing. How many months would it be before Abraham of Ur would become the central figure of the Old Testament in the Bible? 

With scale they needed better instruments to keep track of people, stock, and other calculations. The Sumerian abacus was later used by the Egyptians, and then the device we know of as an abacus today entered widespread use in the sixth century BCE in the Persian Empire. More and more humans were learning more precise counting and numbering systems. 

They didn’t just irrigate their fields; they built levees to control floodwaters and canals to channel river water into irrigation networks. Because water was so critical to their way of life, the Sumerian city-states would war and so built armies. 

Writing and arithmetic don’t learn themselves. The Sumerians also developed the concept of going to school for twelve years. This allowed someone to be a scribe or writer, which were prestigious as they were as necessary in early civilizations as they are today. 

In the meantime, metallurgy saw gold appear in 4,000 BCE. Silver and lead in 3,000 BCE, and then copper alloys. Eventually with a little tin added to the copper. By 3000 BCE this ushered in the Bronze Age. And the need for different resources to grow a city or empire moved centers of power to where those resources could be found. 

The Mesopotamian region also saw a number of other empires rise and fall. The Akkadians, Babylonians (where Hammurabi would eventually give the first written set of laws), Chaldeans, Assyrians, Hebrews, Phoenicians, and one of the greatest empires in history, the Persians, who came out of villages in Modern Iran that went back past 10,000 BCE to rule much of the known world at the time. The Persians were able to inherit all of the advances of the Sumerians, but also the other cultures of Mesopotamia and those they traded with. One of their trading partners that the Persians conquered later in the life of the empire, was Egypt. 

Long before the Persians and then Alexander conquered Egypt, it was a great empire. Wadi Halfa had been inhabited going back 100,000 years. Industries, complexes, and cultures came and went. Some would die out but most would merge with other cultures. There is not much archaeological evidence of what happened from 9,000 to 6,000 BCE, but around this time many from the Levant and Fertile Crescent migrated into the area, bringing agriculture, pottery, then metallurgy. 

These were the Nabta, then Tasian, then Badarian, then Naqada, then Amratian cultures, and in around 3500 BCE we got the Gerzean, who set the foundation for what we may think of as Ancient Egypt today. With a drop in rain, people suddenly moved more quickly from the desert-like lands around the Nile into the increasingly metropolitan centers. Cities grew, and with trade routes between Egypt and Mesopotamia they frequently mimicked the larger culture. 

From 3200 BCE to 3000 BCE we saw irrigation begin in protodynastic Egypt. We saw them importing obsidian from Ethiopia, cedar from Lebanon, and growing. The Canaanites traded with them, and often through those types of trading partners, Mesopotamian know-how infused the empire. As did trade with the Nubians to the south, who had pioneered astrological devices. At this point we got Scorpion, Iry-Hor, Ka, Scorpion II, and Double Falcon. These represented the confederation of tribes who, under Narmer, would unite Egypt - and Narmer would become the first Pharaoh. They would all be buried in Umm El Qa’ab, along with kings of the first dynasty who went from a confederation to a state to an empire. 

The Egyptians would develop their own written language, using hieroglyphs. They took writing to the next level, using ink on papyrus. They advanced geometry and mathematics. They invented toothpaste. They built locked doors. They took the calendar to the next level as well, giving us 365-day years and three seasons. They’d have added a fourth if they’d ever visited Minnesota, don’tchaknow. And many of those obelisks raided by the Romans and then everyone else that occupied Egypt - those were often used as sun clocks. They drank wine, which is traced in its earliest form to China. 

Imhotep was arguably one of the first great engineers and philosophers. Not only was he the architect of the first pyramid, but he supposedly wrote a number of great wisdom texts, was a high priest of Ra, and acted as a physician. And for his work in the 27th century BCE, he was made a deity, one of the few outside of the royal family of Egypt to receive such an honor. 

Egyptians used a screw cut from wood around 2500 BCE, the fourth simple machine. They used it to press olives and make wine. They used the fifth, the inclined plane, to build pyramids. And they helped bring us the last of the simple machines, the pulley. And those pyramids. Where the Mesopotamians built ziggurats, the Egyptians built more than 130 pyramids from 2700 BCE to 1700 BCE. And the Great Pyramid of Giza would remain the largest building in the world for 3,800 years. It is built out of 2.3 million blocks, some of which weigh as much as 80 tonnes. Can you imagine 100,000 people building a grave for you? 

The sundial emerged in 1,500 BCE, presumably in Egypt - and so while humans had always had limited lifespans, our lives could then be divided up into increments of time. 

The Chinese cultural complexes grew as well. Technology and evolving social structures allowed the first recorded unification of all those neolithic peoples when Yu the Great and his father brought flood control. That family, as the Pharaohs had, claimed direct heritage to the gods, in this case, the Yellow Emperor. The Xia Dynasty began in China in 2070 BCE. They would flourish until 1600 BCE, when they were overthrown by the Shang, who lasted until 1046 BCE, when they were overthrown by the Zhou - the last ancient Chinese dynasty before Imperial China. 

Greek civilizations began to grow as well. Minoan civilization from 1600 to 1400 BCE grew to house up to 80,000 people in Knossos. Crete is a large island a little less than half way from Greece to Egypt. There are sites throughout the islands south of Greece that show a strong Aegean and Anatolian Cycladic culture emerging from 4,000 BCE but given the location, Crete became the seat of the Minoans, first an agricultural community and then merchants, facilitating trade with Egypt and throughout the Mediterranean. The population went from less than 2,000 people in 2500 BCE to up to 100,000 in 1600 BCE. They were one of the first to be able to import knowledge, in the form of papyrus from Egypt.

The Mycenaeans in mainland Greece, along with earthquakes that destroyed a number of the buildings on Crete, contributed to the fall of the Minoan civilization, and alongside the Hittites, Assyrians, Egyptians, and Babylonians we got the rise of the first mainland European empire: Mycenaean Greece. Sparta would rise, as would Athens, Corinth, and Thebes. After conquering Troy in the Trojan War, the empire went into decline with the Bronze Age collapse. We can read about the war in the Iliad and the return home in the Odyssey, written by Homer nearly 400 years later. 

The Bronze Age ended around 1,200 BCE - as various early empires outgrew the ability to rule ancient metropolises and lands effectively, as climate change forced increasingly urbanized centers to de-urbanize, as the sources of tin dried up, and as smaller empires banded together to attack larger ones. Many of these empires had become dependent on trade. Trade spread ideas and technology and science. But tribalism and warfare disrupted trade routes and fractured societies. We had to get better at re-using copper to build new things. The fall of cultures caused refugees, as we see today. It's likely a confluence of changing cultures and what we now call the Sea Peoples that caused the collapse. The Sea Peoples included refugees, foreign warlords, and mercenaries used by existing empires. They could have been the Philistines, Minoans, warriors coming down from the Black Sea, Italians, people escaping a famine on the Anatolian peninsula, Mycenaeans fleeing the Dorian invasion, Sardinians, Sicilians, or even Hittites after the fall of that empire. The likely story is a little bit of each of these. In the aftermath, the Neo-Assyrians took Mesopotamia and then weakened, the Neo-Babylonians followed, and the Persian Empire would ultimately be the biggest winner.

But at the end of the Bronze Age, we had all the components for the birth of the Iron Age. Humans had writing, were formally educating our young, had codified laws; we mined, we had metallurgy, we tamed nature with animal husbandry, we developed dense agriculture, we architected, we warred, we destroyed, we rebuilt, we healed, and we began to explain the universe. We started to harness several of the six simple machines to do something more in the world. We had epics that taught each generation to identify places in the stars and passed important knowledge on to the next. 

And precision was becoming more important. Like being able to predict an eclipse. This led Chaldean astronomers to establish the Saros, a period of 223 synodic months, to predict the eclipse cycle. And instead of humans computing those times, within just a few hundred years Archimedes would document the use of, and begin putting math behind, many of the six simple machines so we could take interdisciplinary approaches to leveraging compound and complex machines to build devices like the Antikythera mechanism. We were computing. We also see that precision in the way buildings were created. 
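The arithmetic behind the Saros is worth a quick sketch. Here I use the modern measured length of a synodic month; the Chaldeans derived their period observationally, not from the figure below:

```python
# The Saros: 223 synodic months between eclipses of similar geometry.
SYNODIC_MONTH_DAYS = 29.530589  # modern mean new-moon-to-new-moon period

saros_days = 223 * SYNODIC_MONTH_DAYS
saros_years = saros_days / 365.2425  # mean Gregorian year, for scale

print(round(saros_days, 2))   # about 6585.32 days
print(round(saros_years, 2))  # about 18.03 years - roughly 18 years and 11 days
```

That extra third of a day is why successive eclipses in a Saros series land about 120 degrees of longitude apart.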

After the collapse of the Bronze Age there would be a time of strife. Warfare, famines, disrupted trade. The great works of the Pharaohs, Mycenaeans and other world powers of the time would be put on hold until a new world order started to form. As those empires grew, the impacts would be lasting and the reach would be greater than ever. 

We’ll add a link to the episode that looks at these, taking us from the Bronze Age to antiquity. But humanity slowly woke up to proto-technology. And certain aspects of our lives have been inherited over so many generations from then. 

The Printing Press


The written word allowed us to preserve human knowledge, or data, from generation to generation. We know only what we can observe from ancient remains from before writing, but we know more and more about societies as generations of people literate enough to document their stories spread. And the more that was documented, the more knowledge there was to easily find and build upon, and thus the more rapid the innovation available to each generation...

The Sumerians established the first written language in the third millennium BCE. They carved data on clay. Written languages spread, and by the 26th century BCE the Diary of Merer was written to document building the Great Pyramid of Giza. The Egyptians wrote on papyrus, made from the papyrus plant. They would extract the pulp and make thin sheets from it. The sheets of papyrus ranged in color and in how smooth the surface was. But papyrus doesn't grow everywhere. 

People had painted on pots and other surfaces and ended up writing on leather at about the same time. Over time, it is only natural that they moved on to use parchment, or stretched and dried goat, cow, and sheep skins, to write on. Vellum is another material we developed to write on, similar, but made from calfskin. The Assyrians and Babylonians started to write on vellum in the 6th century BCE. 

The Egyptians wrote what we might consider data, effectively encoded into the pictograms we now call hieroglyphs, on papyrus and parchment with ink. For example, per the Unicode Standard 13.0, my cat would be the hieroglyph at code point U+130E0. But digital representations of characters wouldn't come for a long time. It was still carved in stone or laid out in ink back then. 
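The mechanics of that code point can be sketched quickly. Whether U+130E0 really depicts the cat is the episode's claim; the encoding works the same for any code point in the Egyptian Hieroglyphs block:

```python
# Unicode code point U+130E0, from the Egyptian Hieroglyphs block (U+13000-U+1342F).
glyph = chr(0x130E0)

print(hex(ord(glyph)))        # 0x130e0
print(glyph.encode("utf-8"))  # four bytes: b'\xf0\x93\x83\xa0'
```

Code points above U+FFFF like this one take four bytes in UTF-8, which is why older systems that assumed two-byte characters still choke on hieroglyphs and emoji alike.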

Ink was developed by the Chinese thousands of years ago, possibly first by mixing soot from a fire and various minerals. It’s easy to imagine early neolithic peoples stepping in a fire pit after it had cooled and  realizing they could use first their hands to smear it on cave walls and then a stick and then a brush to apply it to other surfaces, like pottery. By the time the Egyptians were writing with ink, they were using iron and ocher for pigments. 

India ink was introduced in the second century in China. They used it to write on bamboo, wooden tablets, and even bones. A form of it had been used in India since the fourth century BCE, made from burned bits of bone, a powder made of petroleum called carbon black, and pigments mixed with hide glue, then ground and dried. This allowed a writer to dip a wet brush into the mixture in order to write with it. And these inks were used up through the Greek and then Roman times.

More innovative chemical compounds would be used over time. We added lead, pine soot, vegetable oils, animal oils, mineral oils, and while the Silk Road is best known for bringing silks to the west, Chinese ink was the best and another of the luxuries transported across it, well into the 17th century. 

Ink wasn't all the Silk Road brought. Paper was first introduced in the first century in China. During the Islamic Golden Age, the Islamic world expanded its use in the 8th century, adding the science needed to build larger mills to make pulp and paper. Paper then made it to Europe in the 11th century.

So ink and paper laid the foundation for the mass duplication of data. But how to duplicate? 

We passed knowledge down verbally for tens of thousands of years. Was it accurate with each telling? Maybe. And then we preserved our stories in a written form for a couple thousand years in a one to one capacity. The written word was done manually, one scroll or book at a time. And so they were expensive. But a family could keep them from generation to generation and they were accurate across the generations.

Knowledge passed down in written form, and many a manuscript was copied ornately, with beautiful pictures drawn on the page. But in China they were again innovating. Woodblock printing goes back at least to the second century, to print designs on cloth, but had grown to include books by the seventh century. The Diamond Sutra, a Tang Dynasty book from 868, may be the first printed book, using wood blocks that had been carved in reverse. 

And moveable type came along in 1040, from Bi Sheng in China. He carved letters into clay. Wang Chen in China then printed a text on farming practices called Nung Shu in 1297 and added a number of innovations to the Chinese presses. And missionaries and trade missions from Europe to China likely brought reports home, including copies of the books.

Intaglio printing emerged where lines were cut, etched, or engraved into metal plates, dipped into ink and then pressed onto paper. Similar tactics had been used by goldsmiths for some time. 

But then a goldsmith named Johannes Gutenberg began to experiment using similar ideas just adding the concept of moveable type. He used different alloys to get the letter pressing just right - including antimony, lead, and tin. He created a matrix to mold new type blocks, which we now refer to as a hand mould. He experimented with different kinds of oil and water-based inks. And vellum and paper.  

And so Gutenberg would get credit for inventing the printing press in 1440. This took the basic concept of the screw press, which the Romans introduced in the first century to press olives and wine, and added moveable type with lettering made of metal. He was at it for a few years. Just one problem: he needed to raise capital in order to start printing at a larger scale. So he went to Johann Fust and took out a loan for 800 guilders. He printed a few projects and then thought he should start printing Bibles. So he took out another loan from Fust for 800 more guilders to print what we now call the Gutenberg Bible, and printed indulgences for the church as well. 

By 1455 he'd printed 180 copies of the Bible and seemed on the brink of finally making a profit. But the loan from Fust at 6% interest had grown to over 2,000 guilders, and once Fust's son-in-law was ready to run the press, Fust sued Gutenberg, ending up with Gutenberg's workshop and all of the Bibles and basically bankrupting Gutenberg by 1460. He would die in 1468. 

The Mainz Psalter was commissioned by the Mainz archbishop in 1457, and Fust, along with Peter Schöffer, a former Gutenberg assistant, used the press to produce it: the first book printed with the mark of its printer. They would continue to print books, and Schöffer added dates in books, colored ink, type-founding, punch cutting, and other innovations. Schöffer's sons would carry on the art, as did his grandson. 

As word spread of the innovation, Italians started printing presses by 1470. German printers went to the Sorbonne and by 1476 they set up companies to print. Printing showed up in Spain in 1473, England in 1476, and Portugal by 1495. In a single generation, the price of books plummeted and the printed word exploded, with over 20 million works being printed by 1500 and 10 times that by 1600.

Before Gutenberg, a single scribe could spend years copying only a few editions of a book; with a press, up to 3,600 pages a day could be printed. The Catholic Church had cornered the market on Bibles and, facing a cash crunch, Pope Alexander VI threatened to excommunicate anyone printing unapproved manuscripts. Within two decades, John Calvin and Martin Luther changed the world with their books, and Copernicus, followed quickly by other scientists, published works even with threats of excommunication or the Inquisition. 

As presses grew, new innovative uses grew with them. We got the first newspaper in 1605. Literacy rates were going up, people were becoming more educated, and science and learning were spreading in ways they never had before. Freedom to learn became freedom of thought, and Christianity became fragmented as other thinkers had other ideas of spirituality. We were ready for the Enlightenment. 

Today we can copy and paste text from one screen to the next on our devices. We can make a copy of a single file and have tens of thousands of ancient or modern works available to us in an instant. In fact, plenty of my books are available to download for free on sites with or without mine or my publisher's consent. Or we can just do a quick Google search and find most any book we want. And with the ubiquity of literacy we moved from printed paper to disks to online, and our content creation has exploded. 90% of the data in the world was created in the past two years. We are producing over 2 quintillion bytes of data daily. Over 4 and a half billion people are connected. What's crazy is that still leaves nearly 3 and a half billion people who aren't online. 

Imagine having nearly double the live streamers on Twitch and dancing videos on TikTok! I have always maintained a large physical library. And while writing many of these episodes and the book it’s only grown. Because some books just aren’t available online, even if you’re willing to pay for them. 

So here's a parting thought I'd like to leave you with today: history is also full of anomalies, moments when someone got close to a discovery but we would have to wait thousands of years for it to come up again. The Phaistos Disc is a Minoan fired clay disc from Crete. It was made by stamping Minoan hieroglyphs into the clay, a form of movable type thousands of years before Gutenberg. 

And just like sometimes it seems something may have come before its time, we also like to return to the classics here and there. Up until the digital age, paper was one of the most important industries in the world. Actually, it still is. But this isn’t to say that we haven’t occasionally busted out parchment for uses in manual writing. The Magna Carta and the US Constitution were both written on parchment.

So think about what you see that is before its time, or after. And keep a good relationship with your venture capitalists so they don’t take the printing presses away. 

The Scientific Revolution: Copernicus to Newton


Following the Renaissance, Europe had an explosion of science. The works of the Greeks had been lost during the Dark Ages while civilizations caught up to the technical progress. Or so we were taught in school. Previously, we looked at the contributions during the Golden Age of the Islamic Empires and the Renaissance when that science returned to Europe following the Holy Wars.

The great thinkers from the Renaissance pushed boundaries and opened minds. But the revolution coming after them would change the very way we thought of the world. It was a revolution based in science and empirical thought, lasting from the middle of the 1500s to late in the 1600s. 

There are three main aspects I’d like to focus on in terms of taking all the knowledge of the world from that point and preparing it to give humans enlightenment, what we call the age after the Scientific Revolution. These are new ways of reasoning and thinking, specialization, and rigor. Let’s start with rigor.

My cat jumps on the stove and burns herself. She doesn’t do it again. My dog gets too playful with the cat and gets smacked. Both then avoid doing those things in the future.

Early humans learn that we can forage certain plants and then realize we can take those plants to another place and have them grow. And then we realize they grow best when planted at certain times of the year. And watching the stars can provide guidance on when to do so. This evolved over generations of trial and error. 

Yet we believed those stars revolved around the earth for much of our existence. Even after designing orreries and mapping the heavens, we still hung on to this belief until Copernicus. His 1543 work "On The Revolutions of the Heavenly Spheres" marks the beginning of the Scientific Revolution. Here, he claimed, almost heretically, that the Earth and the planets in fact revolved around the sun. 

This wasn’t exactly new. Aristarchus had theorized this heliocentric model in Ancient Greece. Ptolemy had disagreed in Almagest, where he provided tables to compute location and dates using the stars. Tables that had taken rigor to produce. And that Ptolemaic system came to be taken for granted. It worked fine. 

The difference was, Copernicus had newer technology. He had newer optics, thousands more years of recorded data (some of which was contributed by philosophers during the golden age of Islamic science), the texts of ancient astronomers, and newer ecliptical tables and techniques with which to derive them. 

Copernicus didn’t accept what he was taught but instead looked to prove or disprove it with mathematical rigor. The printing press came along in 1440 and 100 years later, Luther was lambasting the church, Columbus discovered the New World, and the printing press helped disseminate information in a way that was less controllable by governments and religious institutions who at times felt threatened by that information. For example, Outlines of Pyrrhonism from first century Sextus Empiricus was printed in 1562, adding skepticism to the growing European thought. In other words, human computers were becoming more sentient and needed more input. 

We couldn’t trust what the ancients were passing down and the doctrine of the church was outdated. Others began to ask questions. 

Johannes Kepler published Mysterium Cosmographicum in 1596, in defense of Copernicus. He would go on to study math, such as the relationship between math and music, and the relationship between math and the weather. In 1604 he published Astronomiae Pars Optica, or The Optical Part of Astronomy, where he worked on optical theory and proposed a new method to measure eclipses of the moon. He would become the imperial mathematician to Emperor Rudolf II, where he could work with other court scholars, and he published numerous other works that pushed astronomy, optics, and math forward. His Epitome of Copernican Astronomy would go further than Copernicus, assigning ellipses to the movements of celestial bodies, and while it didn't catch on immediately, his inductive reasoning, and the rigor that followed, were enough to have him conversing with Galileo. 

Galileo furthered the work of Copernicus and Kepler. He picked up a telescope in 1609, and in his lifetime saw magnification go from 3 to 30 times. This allowed him to map Jupiter's moons, proving the orbits of other celestial bodies. He identified sunspots. He studied the motion of bodies and developed formulas for inertia and parabolic trajectories. 
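Galileo's parabola falls out of uniform horizontal motion combined with constant downward acceleration. A sketch in modern notation, which of course came centuries later; the function name and the value of g are my choices:

```python
import math

def trajectory_range(v, angle_deg, g=9.81):
    """Horizontal range of a projectile launched from flat ground.

    Uniform horizontal speed plus constant downward acceleration gives
    Galileo's parabola; the range works out to v^2 * sin(2*theta) / g.
    """
    theta = math.radians(angle_deg)
    return v ** 2 * math.sin(2 * theta) / g

# A 45-degree launch maximizes range, giving v^2 / g.
print(round(trajectory_range(10, 45), 2))  # about 10.19 meters
```

The key insight, that the horizontal and vertical motions can be treated independently, is exactly what made the trajectory tractable with the math of the day.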

We were moving from deductive reasoning, or starting our scientific inquiry with a theory, to inductive reasoning, or creating theories based on observation. Galileo's observations expanded our knowledge of Venus, the moon, and the tides. He helped to transform how we thought, despite ending up in an Inquisition over his findings.

The growing quantity and types of systematic experimentation represented a shift in values. Empiricism meant observing evidence for yourself and submitting to the review of peers, whether they agreed or not. These methods were being taught in growing schools but also in salons and coffee houses and, as had been done in Athens, in paid lectures.

Sir Francis Bacon argued for basing scientific knowledge only on inductive reasoning. We now call this the Baconian method, which he wrote about in 1620 when he published his book Novum Organum, Latin for New Method. This was the formalization of eliminative induction. He was building on, if not replacing, the inductive-deductive method in Aristotle's Organon. Bacon was the Attorney General of England and actually wrote Novum Organum while sitting as the Lord Chancellor of England, who presides over the House of Lords and was also the highest judge, at least before Tony Blair's reforms. 

Bacon’s method built on ancient works from not only Aristotle but also Al-Biruni, al-Haytham, and many others. And has influenced generations of scientists, like John Locke. 

René Descartes helped lay the further framework for rationalism, coining the phrase "I think, therefore I am." He became, by many accounts, the father of modern Western philosophy, asking what we can be certain of, or what is true. This helped him rethink various works and develop Cartesian geometry. Yup, he was the one who developed its standard notation in 1637, a thought process that would go on to impact many other great thinkers for generations, especially with the development of calculus. As with many other great natural scientists, or natural philosophers, of the age, he also wrote on the theory of music and anatomy, and some of his works could be considered a proto-psychology. 

Another method that developed in the era was empiricism, which John Locke proposed in An Essay Concerning Human Understanding in 1689. George Berkeley, Thomas Hobbes, and David Hume would join that movement and develop a new basis for human knowledge in the empirical tradition: that the only true knowledge accessible to our minds is that based on experience.

Optics and simple machines had been studied and known of since antiquity. But tools that deepened the understanding of the sciences began to emerge during this time. We got the steam digester, new forms of telescopes, vacuum pumps, and the mercury barometer. And, most importantly for this body of work, we got the mechanical calculator. 

Robert Boyle was influenced by Galileo, Bacon, and others. He gave us Boyle's Law, explaining how the pressure of a gas increases as the volume of the container holding the gas decreases. He built air pumps. He investigated how freezing water expands, and he experimented with crystals, magnetism, and early forms of electricity. He published The Sceptical Chymist in 1661 and another couple of dozen books. Before him we had alchemy; after him, we had chemistry.
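Boyle's Law says that, at constant temperature, pressure times volume stays constant, so halving the container doubles the pressure. A minimal sketch; the function name and units here are mine:

```python
def boyle_pressure(p1, v1, v2):
    """Boyle's Law: p1 * v1 = p2 * v2 at constant temperature.

    Given an initial pressure and volume and a new volume,
    return the new pressure.
    """
    return p1 * v1 / v2

# Halve the volume of a gas at 1 atmosphere and the pressure doubles.
print(boyle_pressure(1.0, 10.0, 5.0))  # 2.0
```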

One of his students was Robert Hooke. Hooke defined the law of elasticity, and he experimented with everything. He made musical tones from brass cogs that had teeth cut in specific proportions; this is storing data on a disk, in a way. He coined the term cell in Micrographia, published in 1665, and also studied gravitation. 

And Hooke argued, conversed, and exchanged letters at great length with Sir Isaac Newton, one of the greatest scientific minds of all time. Newton gave us the first theory on the speed of sound, Newtonian mechanics, and the binomial series. He also gave us Newton's Rules for Science, which are as follows:

  1. We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
  2. Therefore to the same natural effects we must, as far as possible, assign the same causes.
  3. The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
  4. In experimental philosophy we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, until such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

These appeared in Principia, which gave us the laws of motion and a mathematical description of gravity leading to universal gravitation. Newton never did find the secret to the Philosopher's Stone while working on it, although he did become the Master of the Royal Mint at a pivotal time of recoining, so who knows. But he developed the first reflecting telescope and made observations about prisms that led to his book Opticks in 1704. And ever since he and Leibniz developed calculus, high school and college students alike have despised him. 

Leibniz also did a lot of work on calculus, but was a great philosopher as well. His work on logic rested on two principles: 

  1. All our ideas are compounded from a very small number of simple ideas, which form the alphabet of human thought.
  2. Complex ideas proceed from these simple ideas by a uniform and symmetrical combination, analogous to arithmetical multiplication.

This would ultimately lead to the algebra of concepts and after a century and a half of great mathematicians and logicians would result in Boolean algebra, the zero and one foundations of computing, once Claude Shannon gave us information theory a century after that. 
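Leibniz's second principle, combination "analogous to arithmetical multiplication," maps neatly, in hindsight, onto how Boolean algebra treats logic over zeros and ones. A minimal sketch; the function names are mine, just for illustration:

```python
# In Boolean algebra over {0, 1}, AND really is multiplication,
# and OR is addition capped at 1 - Leibniz's arithmetical analogy made literal.
def AND(a, b):
    return a * b

def OR(a, b):
    return min(a + b, 1)

# The full truth tables, as a circuit designer would read them.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))
```

Those two tables, plus NOT, are all the logic a digital computer is ultimately built from.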

Blaise Pascal was another of these philosopher mathematician physicists who also happened to dabble in inventing. I saved him for last because he didn’t just do work on probability theory, do important early work on vacuums, give us Pascal’s Triangle for binomial coefficients, and invent the hydraulic press. Nope. He also developed Pascal’s Calculator, an early mechanical calculator that is the first known to have worked. He didn’t build it to do much, just help with the tax collecting work he was doing for his family. 

The device could easily add and subtract two numbers and then loop through those tasks in order to do rudimentary multiplication and division. He would only build about 50, but the Pascaline, as it came to be known, was an important step in the history of computing. And that Leibniz guy? He invented the Leibniz wheel to make multiplication automatic rather than just looping through addition steps. It wouldn't be until 1851 that the Arithmometer made a real commercial go at mechanical calculators in a larger and more businesslike way. While Thomas de Colmar, the inventor of that device, is best known today for his work on the calculator, his real legacy is the 1,000 families who get their income from the insurance company he founded, which is still in business as GAN Assurances, and the countless families who have worked there or used their services. 
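The looping described above, multiplication as repeated addition, is easy to sketch. This is only an illustration of the idea, not a model of the Pascaline's actual gears and carry mechanism:

```python
def pascaline_multiply(a, b):
    """Multiply by looping an adder - the way an operator would
    reuse the Pascaline's addition mechanism b times over."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(pascaline_multiply(6, 7))  # 42
```

Division worked the same way in reverse: repeated subtraction while counting how many times the subtraction fits.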

That brings us to the next point: specialization. Since the Egyptians and Greeks, we've known that the more specialists we had in a field, the more discoveries they made. Many of these were philosophers or scientists. They studied the stars and optics and motions and mathematics and geometry for thousands of years, and an increasingly large amount of information was available to the generations that followed, starting with the written word first being committed to clay tablets in Mesopotamia.

The body of knowledge had grown to the point where one could study a branch of science, such as mathematics, physics, astronomy, biology, or chemistry, for an entire lifetime, improving each field in their own way. Every few generations, this transformed societal views about nature. We also increased our study of anatomy, with an increase in, or return to, the dissection of human corpses after a time when that was not allowed.

And these specialties began to diverge into their own fields in the next generations. There was certainly still collaboration, and in fact the new discoveries only helped to make science more popular than ever.

Given the increased popularity, there was more work done, more theories to prove or disprove, more scholarly writings, which were then given to more and more people through innovations to the printing press, and a more and more literate populace. Seventeenth-century scientists and philosophers were able to collaborate with members of the mathematical and astronomical communities to effect advances in all fields.

All of this rapid change in science since the end of the Renaissance created a groundswell of interest in new ways to learn about findings and who was doing what. There was a Republic of Letters, a community of intellectuals spread across Europe and America. These informal networks sprang up and spread information that might previously have been considered heretical, transmitted through secret societies of intellectuals and through encrypted letters. And they fostered friendships, like in the early days of computer science. 

There were groups meeting in coffee houses and salons. The Royal Society of London was founded in 1660 and started a publication called Philosophical Transactions in 1665. The society runs to this day, and its more than 8,000 fellows over the years have included Robert Hooke, Newton, Darwin, Faraday, Einstein, Francis Crick, Turing, Tim Berners-Lee, Elon Musk, and Stephen Hawking. And it inspired Colbert to establish the French Academy of Sciences in 1666.

They swapped papers, read one another’s works, and that peer review would evolve into the journals and institutions we have today. There are so many more than the ones mentioned in this episode. Great thinkers like Otto von Guericke, Otto Brunfels, Giordano Bruno, Leonard Fuchs, Tycho Brahe, Samuel Hartlib, William Harvey, Marcello Malpighi, John Napier, Edme Mariotte, Santorio Santorio, Simon Stevin, Franciscus Sylvius, John Baptist van Helmont, Andreas Vesalius, Evangelista Torricelli, Francois Viete, John Wallis, and the list goes on. 

Scientific communities were finally beyond where the Greeks had left off with Plato's Academy and the letters exchanged by ancient scholars. The scientific societies emerged similarly, centuries later. But the empires had more people, more resources, and traditions of science to build on. 

This massive jump in learning prepared us for a period we now call the Enlightenment, which opened minds and readied humanity to accept a new level of science. The books, essays, society periodicals, universities, discoveries, and inventions are often lost in the classroom, where the focus can be on the wars and revolutions they often inspired. But those who emerged in the Scientific Revolution acted as guides for the Enlightenment philosophers, scientists, engineers, and thinkers that would come next. But we'll have to pick that back up in the next episode!

The First Analog Computer: The Antikythera Device


Sponges are some 8,000 species of animals that grow in the sea and lack tissues and organs. Fossil records go back over 500 million years, and they are found throughout the world. A couple of types of sponges are soft and can be used to hold water that can then be squeezed out, or used to clean. Homer wrote about using sponges as far back as the 7th century BCE, in the Odyssey: Hephaestus cleaned his hands with one, much as you and I do today. 

Aristotle, Plato, the Romans, even Jesus Christ all discussed cleaning with sponges. And many likely came from places like the Greek island of Kalymnos, where people have harvested and cultivated sponges in the ocean since that time. They would sail boats with glass bottoms looking for sponges and then dive into the water, long before humans developed diving equipment, carrying a weight, cutting the sponge free, and tossing it into a net. Great divers could stay on the floor of the sea for up to 5 minutes. 

Some 2,600 years after Homer, diving for sponges was still very much alive and well in the area. The people of Kalymnos have been ruled by various Greek city-states, the Roman Empire, the Byzantines, the Venetians, and, still in 1900, the Ottomans. Archaeologist Charles Newton had excavated a Temple of Apollo on the island in the 1850s, just before he went to Turkey to excavate one of the Seven Wonders of the Ancient World: the Mausoleum of Halicarnassus, built for Mausolus, a tomb so grand that we still call buildings that are tombs mausoleums in his honor to this day.

But 1900 was the dawn of a new age. Kalymnos had grown to nearly 1,000 souls. Proud of their Greek heritage, the people of the island didn't really care what world power claimed their lands. They carved out a life in the sea, grew food and citrus, drank well, made head scarves, and, despite the waning Ottoman rule, practiced Orthodox Christianity. 

The sponges were still harvested from the floor of the sea rather than made from synthetic petroleum products. Captain Dimitrios Kontos and his team of sponge divers were sailing home from a successful run to the island of Symi, just as their people had done for thousands of years, when someone spotted something. They were off the coast of Antikythera, another Greek island, one inhabited since the 4th or 5th millennium BCE, a base for Cilician pirates from the 4th to 1st centuries BCE, and at the time the southernmost point in Greece.

They dove down and, having heard stories from the previous archaeological expedition, knew they were on to something. Something old. They brought back a few smaller artifacts, like a bronze arm, as proof of their find, noting the seabed was littered with statues that looked like corpses. 

They recorded the location and returned home. They went to the Greek government in Athens, thinking they might get a reward for the find, where Professor Ikonomu took them to meet with the Minister of Education, Spyridon Stais. He offered to have his divers bring up the treasure in exchange for pay equal to the value of the plunder, and the Greek government sent a ship to help winch up the treasures.

They brought up bronze and marble statues, and pottery. When they realized the haul was bigger than they thought, the navy sent a second ship. They used diving suits, which were still an emerging technology at the time. One diver died. The ship turned out to be over 50 meters long, with the wreckage strewn across 300 meters.

The shipwreck happened somewhere between 80 and 50 BCE. The ship was carrying cargo from Asia Minor, probably to Rome, and was sunk not by pirates, who had just recently been cleared from the area, but most likely by a storm. There are older shipwrecks, such as the Dokos from around 2200 BCE, just 60 miles east of Sparta, but few have given up as precious a cargo. We still don’t know how the ship came to be where it was, but there is speculation that it was sailing from Rhodes to Rome for a parade marking the victories of Julius Caesar.

Everything brought up went on to live at the National Museum of Archaeology in Athens. There were fascinating treasures to be cataloged and so it isn’t surprising that between the bronze statues, the huge marble statues of horses, glassware, and other Greek treasures that a small corroded bronze lump in a wooden box would go unloved. That is, until archaeologist Valerios Stais noticed a gear wheel in it. 

He thought it must belong to an ancient clock, but that was far too complex for the Greeks. Or was it? It is well documented that Archimedes had been developing the use of gearwheels. And Hero of Alexandria had supposedly developed a number of great mechanical devices while at the Library of Alexandria. 

Kalymnos was taken by the Italians in the Italo-Turkish War in 1912. World War I came and went. After the war, the Ottoman Empire fell, and with Turkish nationalists taking control, they went to war with Greece. The Ottoman Turks killed between 750,000 and 900,000 Greeks. The Second Hellenic Republic came and went. World War II came and went. And Kalymnos was finally returned to Greece from Italy. With so much unrest, archaeology wasn’t on nearly as many minds.

But after the end of World War II, a British historian of science who was teaching at Yale at the time took an interest in the device. His name was Derek de Solla Price. In her book Decoding the Heavens, Jo Marchant takes us through a hundred-year journey where scientists and archaeologists use the most modern technology available to them at the time to document the device and publish theories as to what it could have been used for. This began with drawings and moved into X-ray technology, becoming better and more precise with each generation. And this mirrors other sciences. We make observations, or theories, as to the nature of the universe, only to be proven right or wrong when the technology of the next generation uncovers more clues. It’s a great book and a great look at how archaeology itself evolved over the course of the 20th century.

She tells of times before World War II, when John Svoronos and Adolf Wilhelm uncovered the first inscriptions and when Pericles Redials was certain the device was a navigational instrument used to sail the ship. She tells of Theophanidis publishing a theory in 1934 that it might be driven by a water clock. She weaves in Jacques Cousteau and Maria Savvatianou and Gladys Weinberg and Peter Throckmorton and Price and Wang Ling and Arthur C. Clarke and nuclear physicist Charalambos Karakalos and Judith Field and Michael Wright and Allan Bromley and Alan Crawley and Mike Edmunds and Tony Freeth and Nastulus, a tenth-century astronomer in Baghdad.

Reverse engineering the 37 gears took a long time. I mean, figuring out the number of teeth per gear, how they meshed, what drove them, and then trying to work out why this gear had a prime number of teeth or what calendar cycle that other one might have represented. Because the orbit isn’t exactly perfect, and the earth is tilted, and all kinds of stuff. Each person unraveled their own piece, and it’s a fantastic journey through history and discovery.

So read the book, and we’ll skip to what exactly the Antikythera device was. Some thought it an astrolabe, which had come into use around 200 BCE and measured the altitude of the sun or stars to help sailors navigate the seas. Not quite. Some theorized it was a clock, but not the kind we use to tell time today. It measured aspects of the celestial bodies more than minutes.

After generations of scientists studied it, most of the secrets of the device are now known. We know it was an orrery - a mechanical model of the solar system. It was an analog computer, driven by a crank, and predicted the positions of various celestial bodies and when eclipses would occur many, many decades in advance - and on a 19 year cycle that was borrowed from cultures far older than the Greeks. The device would have had some kind of indicator, like gems or glass orbs that moved around representing the movements of Jupiter, Mars, Mercury, Saturn, and Venus. It showed the movements of the sun and moon, representing the 365 days of the year as a solar calendar and the 19-year lunar cycle inherited from the Babylonians - and those were plotted relative to the zodiac, or 12 constellations. It forecast eclipses and the color of each eclipse. And phases of the moon.
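The arithmetic behind that 19-year cycle is worth pausing on. The constants below are the well-known Metonic relations (19 years ≈ 235 synodic months ≈ 254 sidereal months), but the code itself is only an illustrative sketch, not a reconstruction of the actual gear train:

```python
from fractions import Fraction

# Metonic relation encoded in the mechanism: 19 solar years ~= 235 synodic months.
METONIC_YEARS = 19
SYNODIC_MONTHS = 235
# The moon circles the zodiac once more per year than it laps the sun,
# so the same span holds 235 + 19 = 254 sidereal months.
SIDEREAL_MONTHS = SYNODIC_MONTHS + METONIC_YEARS

# One turn of the input crank = one year; the moon pointer must then make
# 254/19 turns per crank turn -- a ratio built up from meshed gear teeth.
moon_ratio = Fraction(SIDEREAL_MONTHS, METONIC_YEARS)

def moon_pointer_angle(crank_turns) -> Fraction:
    """Degrees the moon pointer has rotated after `crank_turns` years."""
    return (Fraction(crank_turns) * moon_ratio * 360) % 360

print(moon_ratio)              # 254/19
print(moon_pointer_angle(19))  # 0 -- after a full Metonic cycle the pointer is back home
```

The point of using exact fractions is the same trick the gear-makers used: a ratio of whole tooth counts never drifts, so the pointer returns exactly to its start after 19 turns of the crank.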

Oh and for good measure it also tracked when the Olympic Games were held. 

About that one more thing, calculating the Olympiad. One aspect of the device that I love - and of most clockwork devices, in fact - is the analogy that can be made to a modern microservice architecture in software design. Think of a wheel in clockwork. Then think of each wheel as a small service or class of code. One triggers the next, and so on. The difference being that any service can call any other, and wouldn’t need a shaft or the teeth of only one other wheel to interact - or even differential gearing.
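That analogy can be made concrete with a toy sketch. The gear names and tooth counts below are invented for illustration; nothing here models the real device:

```python
# A toy rendering of the clockwork-as-services analogy. Each "gear" is a tiny
# service that transforms its input and drives whatever meshes with it.

class Gear:
    def __init__(self, name: str, teeth: int):
        self.name = name
        self.teeth = teeth
        self.driven = []  # gears this one meshes with, like downstream services

    def mesh(self, other: "Gear") -> "Gear":
        self.driven.append(other)
        return other

    def turn(self, turns: float) -> dict:
        # Each meshed gear rotates by the tooth ratio, the way a downstream
        # service transforms the payload it receives before passing it on.
        results = {self.name: turns}
        for g in self.driven:
            results.update(g.turn(turns * self.teeth / g.teeth))
        return results

crank = Gear("crank", 48)
crank.mesh(Gear("moon", 24))  # 48:24 means the moon wheel spins twice per crank turn
print(crank.turn(1))          # {'crank': 1, 'moon': 2.0}
```

The difference the episode notes still holds: in a service mesh any node can call any other, while a gear can only drive what its teeth physically touch.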

Reading the story of decoding the device, it almost feels like trying to decode someone else’s code across all those services. 

I happen to believe that most of the stories of gods are true. Just exaggerated a bit. There probably was a person named Odin with a son named Thor or a battle of the Ten Kings in India. I don’t believe any of them were supernatural but that over time their legends grew. Those legends often start to include that which the science of a period cannot explain. The more that science explains, the less of those legends, or gods, that we need.

And here’s the thing. I don’t think something like this just appears out of nowhere. It’s not the kind of thing a lone actor builds in a workshop in Rhodes. It’s the kind of device that evolves over time. One great crafter adds another idea and another philosopher influences another. There could have been a dozen or two dozen that evolved over time, the others lost to history. Maybe melted down to forge bronze weapons, hiding in a private collection, or sitting in a shipwreck or temple elsewhere, waiting to be discovered. 

The Greek philosopher Thales was said to have built a golden orb. Hipparchus of Rhodes was a great astronomer. The Antikythera device was likely built between 200 and 100 BCE, when he would have been alive. Was he consulted during its creation, or even involved? Between Thales and Hipparchus we got Archimedes, Euclid, Pythagoras, Aristotle, Philo, Ctesibius, and so many others. Their books would be in the Library of Alexandria for anyone to read. You could learn of the increasingly complicated Ctesibius water clocks along with their alarms, or the geometry of Euclid, or the inventions of Philo. Or you could read about how Archimedes continued that work and added a chime.

We can assign the device to any of them - or to its heritage. And we can assume that, as with legends of the gods, it was an evolution of science, mathematics, and engineering. And that the science and technology wasn’t lost, as has been argued, but instead moved around as great thinkers moved around. Just as the water clock had been in use since nearly 4000 BCE in modern-day India and China and became increasingly complicated over time, until the Greeks called them clepsydra and anaphoric clocks. Yet replacing water with gears wasn’t considered for a while. Just as it took Boolean algebra and flip-flop circuits to bring us into the age of binary and then digital computing.
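The flip-flop mentioned above can itself be sketched in a few lines. This is a simplified simulation of an SR latch built from two NOR gates feeding back into each other - real circuits settle continuously rather than in discrete steps, but the idea that feedback between two gates can hold one bit of state comes through:

```python
# A minimal simulation of a NOR-based SR latch: two gates feeding back into
# each other store a single bit -- the step from Boolean algebra to memory.

def nor(a: int, b: int) -> int:
    return int(not (a or b))

def sr_latch(s: int, r: int, q: int, qn: int):
    # Iterate the feedback loop a few times until the outputs settle.
    for _ in range(4):
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = sr_latch(s=1, r=0, q=0, qn=1)   # "set" pulse
print(q, qn)                            # 1 0 -- the bit is stored
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # inputs released: the latch remembers
print(q, qn)                            # 1 0
```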

The power of these analog computers could have allowed for simple mathematical devices, like deriving angles or fractions when building. But given that people gotta’ eat, and that ancient calculation devices and maps of the heavens helped guide when to plant crops, that came first in the Maslowian hierarchy of technological determinism.

So until our next episode, consider this: what technology is lying dormant at the bottom of the sea in your closet? Buried under silt but waiting to be dug up by intrepid divers and put into use again, in a new form. What is the next generation of technical innovation for each of the specialties you see? Maybe it helps people plant crops more effectively, this time using digital imagery to plot where to place a seed. Or maybe it’s to help people zero in on information that matters, or open trouble tickets more effectively, or share their discoveries, or claim them, or who knows - but I look forward to finding out what you come up with and hopefully some day telling the origin story of that innovation!

The Evolution and Spread of Science and Philosophy from the Classical Age to the Age of Science


The Roman Empire grew. Philosophy and the practical applications derived from great thinkers were no longer just to impress peers or mystify the commoners into passivity, but to help humans do more. The focus on practical applications was clear. This isn’t to say there weren’t great Romans. We got Seneca, Pliny the Elder, Plutarch, Tacitus, Lucretius, Plotinus, Marcus Aurelius, one of my favorites in Hypatia, and as Christianity spread we got the Christian philosophers in Rome, such as Saint Augustine.

The Romans reached into new lands, and those lands reached back, with attacks coming from the Goths, Germanic tribes, and Vandals, finally resulting in the sack of Rome. They had been weakened by an overreliance on slaves, overspending on the military to fuel constant expansion, government corruption due to a lack of control given the sheer size of the empire, and the need to outsource the military because Roman citizens were needed to run the empire. Rome would split in 285, and by the late fifth century the Western Empire had fallen.

Again, as empires fall, new ones emerge. As the Classical Period ended in each area with the decline of the Roman Empire, we were plunged into the Middle Ages, which I was taught were the Dark Ages in school. But they weren’t dark. Byzantium, the Eastern Roman Empire, survived. The Franks founded Francia in northern Gaul. The Celtic Britons emerged. The Visigoths set up shop in Northern Spain, the Lombards in Northern Italy. The Slavs spread through Central and Eastern Europe, and the Latin language splintered into the Romance languages.

And that spread involved Christianity, whose doctrine often clashed with the ancient philosophies. And great thinkers weren’t valued. Or so it seemed when I was taught about the Dark Ages. But words matter. The Prophet Muhammad was born in this period, and Islamic doctrine spread rapidly throughout the Middle East. He united the tribes of Medina and established a constitution in the early seventh century. After years of war with Mecca, he later seized that land as well. He then went on to conquer the Arabian Peninsula, up into the lands of the Byzantines and Persians. With the tribes of Arabia united, Muslims would conquer the last remains of Byzantine Egypt, Syria, and Mesopotamia, and take large areas of Persia.

This rapid expansion, as it had with the Greeks and Romans, led to new trade routes, and new ideas finding their way to the emerging Islamic empire. In the beginning they destroyed pagan idols, but over time they adapted Greek and Roman technology and thinking into their culture. They brought in maps, medicine, calculations, and agricultural implements. They learned paper making from the Chinese and built paper mills, allowing for an explosion in books. Muslim scholars in Baghdad - often referred to as the new Babylon, given that Babylon is only about 60 miles away - began translating some of the most important works from Greek and Latin, and Islamic teachings encouraged the pursuit of knowledge at the time. Many a great work from the Greeks and Romans is preserved because of those translations.

And as with each empire before them, the Islamic philosophers and engineers built on the learning of the past. They used astrolabes in navigation, chemistry in ceramics and dyes, researched acids and alkalis. They brought knowledge from Pythagoras and Babylonians and studied lines and spaces and geometry and trigonometry, integrating them into art and architecture. Because Islamic law forbade dissections, they used the Greek texts to study medicine.  

The technology and ideas of their predecessors helped them retain control throughout the Islamic Golden Age. The various Islamic empires spread east into China, down the African coast, into Russia, into parts of Greece, and even north into Spain, where they ruled for 800 years. Some grew to control over 10 million square miles. They built fantastic clockworks, documented by al-Jazari in the waning days of the golden age. And the writings included references to influences in Greece and Rome, including the Book of Optics by Ibn al-Haytham in the eleventh century, which was heavily influenced by Ptolemy’s book, Optics. But over time, empires weaken.

Throughout the Middle Ages, monarchs began to be deposed by rising merchant classes, or oligarchs - something the framers of the US Constitution sought to block with the way the government is structured. You can see this in the way the House of Lords had such power in England even after the move to a constitutional monarchy. And after the fall of the Soviet Union, Russia has moved more and more towards rule by oligarchs, first under Yeltsin and then under Putin. Because you see, we continue to re-learn the lessons learned by the Greeks. But differently. Kinda’ like bell bottoms are different than all the other times they were cool each time they come back.

The names of European empires began to resemble what we know today: Wales, England, Scotland, Italy, Croatia, Serbia, Sweden, Denmark, Portugal, Germany, and France were becoming dominant forces again. The Catholic Church was again on the rise as Rome practiced a new form of conquering the world. Two main religions were coming more and more in conflict for souls: Christianity and Islam.

And so began the Crusades of the High Middle Ages. Crusaders brought home trophies. Many were books and scientific instruments. And then came the Great Famine, followed quickly by the Black Death, which spread along with trade and science and knowledge along the Silk Road. Climate change and disease might sound familiar today. France and England went to war for a hundred years. Disruption in the global order again allows for new empires. Genghis Khan built a horde of Mongols that over the next few generations spread through China, Korea, India, Georgia and the Caucasus, Russia, Central Asia and Persia, Hungary, Lithuania, Bulgaria, Vietnam, Baghdad, Syria, Poland, and even Thrace throughout the 13th and 14th centuries. Many great works were lost in the wars, although the Mongols often allowed their subjects to continue life as before - with a hefty tax, of course. They would grow to control 24 million square kilometers before the empire became unmanageable.

This disruption caused various peoples to move, and one was a Turkic tribe fleeing Central Asia under Osman I in the 13th century. The Ottoman empire he founded would go Islamic and grow to include much of the former Islamic regime as it expanded out of Turkey. Over time the Ottomans would invade and rule Greece and Northern Africa, push almost all the way north to Kiev, and move south through the lands of the former Mesopotamian empires. While they didn’t conquer the Arabian peninsula, ruled by other Islamic empires, they did conquer all the way to Basra in the south and took Damascus, Medina, Mecca, and Jerusalem. Still, given the density of population in some cities, they couldn’t grow past the same amount of space controlled in the days of Alexander. But again, knowledge was transferred to and from Egypt, Greece, and the former Mesopotamian lands. And with each turnover to a new empire, more of the great works were taken from these cradles of civilization but kept alive to evolve further.

And one way science and math and philosophy and the understanding of the universe evolved was to influence the coming Renaissance, which began in the late 13th century and spread along with Greek scholars fleeing the Ottoman Turks after the fall of Constantinople throughout the Italian city-states and into England, France, Germany, Poland, Russia, and Spain. Hellenism was on the move again. The works of Aristotle, Ptolemy, Plato, and others heavily influenced the next wave of mathematicians, astronomers, philosophers, and scientists. Copernicus studied Aristotle. Leonardo Da Vinci gave us the Mona Lisa, the Last Supper, the Vitruvian Man, Salvator Mundi, and Virgin of the Rocks. His works are amongst the most recognizable paintings of the Renaissance. But he was also a great inventor, sketching and perhaps building automata, parachutes, helicopters, tanks, and along the way putting optics, anatomy, hydrodynamics and engineering concepts in his notebooks. And his influences certainly included the Greeks and Romans, including the Roman physician Galen. Given that his notebooks weren’t published they offer a snapshot in time rather than a heavy impact on the evolution of science - although his influence is often seen as a contribution to the scientific revolution. 

Da Vinci, like many of his peers in the Renaissance, learned the great works of the Greeks and Romans. And they learned the teachings of the Bible. But they didn’t just take the word of either; they studied nature directly. The next couple of generations of intellectuals included Galileo, who - as with Socrates and countless other thinkers who bucked the prevailing political or religious climate of their time - simply wrote down what he saw with his own eyeballs. He picked up where Copernicus left off and discovered the four moons of Jupiter, and while astronomers continued to espouse that the sun revolved around the Earth, Galileo continued to prove the Earth was in fact suspended in space and to map out the movement of the heavenly bodies.

Clockwork had been used since Greek times, as proven by the Antikythera device and mentions of Archytas’s dove. Mo Zi and Lu Ban built flying birds. As the Greeks and then the Romans fell, automata, as with philosophy and ideas, moved to the Islamic world. The ability to build a gear with a number of teeth to perform a function had been building over time. As had ingenious ways to put rods and axles together and attach differential gearing. Yi Xing, a Buddhist monk in the Tang Dynasty, developed an early escapement along with Liang Lingzan in the eighth century, and the practice spread through China and then outward from there. But now clockwork would get pendulums and springs, and Robert Hooke would give us the anchor escapement in the latter half of the 1600s, making clocks accurate. And that brings us to the scientific revolution, when most of the stories in the history of computing really start to take shape.

Thanks to great thinkers, philosophers, scientists, artists, engineers, and yes, merchants who could fund innovation and spread progress through formal and informal ties, the age of science is when too much began happening too rapidly to really be able to speak about it all meaningfully. The great mathematics and engineering led to industrialization and to further branches of knowledge and specializations, eventually including Boolean algebra. Armed with thousands of years of slow and steady growth in mechanics, theory, optics, and precision, we would get early mechanical computing, beginning the much quicker migration out of the Industrial Age and into the Information Age. These explosions in technology allowed the British Empire to grow to control 34 million square kilometers of territory and the Russian empire 17 million before each overextended.

Since writing was developed, humanity has experienced a generation to generation passing of the torch of science, mathematics, and philosophy. From before the Bronze Age, ideas were sometimes independently perceived or sometimes spread through trade from the Chinese, Indian, Mesopotamian, and Egyptian civilizations (and others) through traders like the Phoenicians to the Greeks and Persians - then from the Greeks to the Romans and the Islamic empires during the dark ages then back to Europe during the Renaissance. And some of that went both ways. 

Ultimately, who first introduced each innovation and who influenced whom cannot be pinpointed in a lot of cases. The Greeks were often given more credit than they deserved, because I think most of us have really fond memories of toga parties in college. But there were generations of people studying all the things and thinking through each field when their other Maslowian needs were met - and those evolving thoughts and philosophies were often attributed to one person rather than all the parties involved in the findings.

After World War II there was a Cold War - and one of the ways that manifested itself was a race to recruit the best scientists from the losing factions of that war, namely Nazi scientists. Some died while being taken to a better new life, as Archimedes had died when the Romans tried to make him an asset. For better or worse, world powers know they need the scientists if they’re gonna’ science - and that you gotta’ science to stay in power. When the masses start to doubt science, they’re probably gonna’ burn the Library of Alexandria, poison Socrates, or exile Galileo for proving that the planets revolve around the sun and have their own moons that revolve around them, rather than the stars all revolving around the Earth. There wasn’t necessarily a dark age - but given what the Greek, Roman, and Chinese thinkers knew, and the substantial slowdown in those in-between periods of great learning, the Renaissance and Enlightenment could have actually come much sooner. Think about that next time you hear people denying science.

To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically, because it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla; Civilizations by Felipe Fernández-Armesto; A Short History of Technology: From The Earliest Times to AD 1900 by TK Derry and Trevor I Williams; Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer; Leonardo da Vinci by Walter Isaacson; Timelines in Science by the Smithsonian; Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell; a few PhD dissertations and post-doctoral studies from journals; and then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny the Elder, Cassius Dio’s Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato’s Symposium and The Trial & Execution of Socrates.

For a running list of all books used in this podcast see the GitHub page at 

The Evolution and Spread of Science and Philosophy from the Bronze Age to The Classical Age


Science in antiquity was at times devised to be useful and at other times to prove to the people that the gods looked favorably on the ruling class. Greek philosophers tell us a lot about how the ancient world developed. Or at least, they tell us a Western history of antiquity. Humanity began working with bronze some 7,000 years ago and the Bronze Age came in force in the centuries leading up to 3,000 BCE.

By then there were city-states and empires. The Mesopotamians brought us the wheel around 3500 BCE, and the chariot by 3200 BCE. Writing formed in Sumer, in southern Mesopotamia, around 3000 BCE. Urbanization required larger cities and walls to keep out invaders. King Gilgamesh built huge walls. The Mesopotamians used a base-60 system to track time, giving us the 60 seconds and 60 minutes on the way to an hour. That sexagesimal system also gave us the 360 degrees in a circle. They plowed fields and sailed. And sailing led to maps, which they had by 2300 BCE. And they gave us the epic, with the Epic of Gilgamesh, which could be as old as 2100 BCE. At this point, the Egyptian empire had grown to 150,000 square kilometers and the Sumerians controlled around 20,000 square kilometers.
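That base-60 legacy still shapes how we write time and angles, and the conversion is simple arithmetic. A small sketch (the function name is mine, purely for illustration):

```python
# Sexagesimal places -- degrees, minutes, seconds -- collapse into one decimal
# value by weighting each place with successive powers of 60.

def sexagesimal_to_decimal(degrees: int, minutes: int, seconds: int) -> float:
    """E.g. 23 deg 30' 00" -> 23.5 degrees (works the same for h:m:s)."""
    return degrees + minutes / 60 + seconds / 3600

print(sexagesimal_to_decimal(23, 30, 0))  # 23.5
print(sexagesimal_to_decimal(1, 30, 0))   # 1.5
```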

Throughout, they grew a great trading empire. They traded with China, India, and Egypt, with some routes dating back to the fourth millennium BCE. And commerce and trade mean the spread of not only goods but also ideas and knowledge. The earliest known writing of complete sentences came to Egypt a few hundred years after it appeared in Mesopotamia, as the Early Dynastic period gave way to the Old Kingdom, or the Age of the Pyramids. Perhaps over a trade route.

The ancient Egyptians used numerals, multiplication, fractions, geometry, architecture, algebra, and even quadratic equations - even having a documented base-10 numbering system on a tomb from 3200 BCE. We also have the Moscow Mathematical Papyrus, which includes geometry problems; the Egyptian Mathematical Leather Roll, which covers how to add fractions; the Berlin Papyrus, with geometry; the Lahun Papyri, with arithmetical progressions to calculate the volume of granaries; the Akhmim tablets; the Reisner Papyrus; and the Rhind Mathematical Papyrus, which covers algebra and geometry. And there’s the Cairo Calendar, an ancient Egyptian papyrus from around 1200 BCE with detailed astronomical observations - observations that mattered because the Nile’s annual flood fed Egypt’s critical crops.

The Mesopotamians traded with China as well. As the Shang dynasty, from the 16th to 11th centuries BCE, gave way to the Zhou Dynasty, which ran from the 11th to 3rd centuries BCE, and the Bronze Age gave way to the Iron Age, science was spreading throughout the world. The I Ching is one of the oldest Chinese works showing math, dating back to the Zhou Dynasty, possibly as old as 1000 BCE. This was also when the Hundred Schools of Thought began, which Confucius inherited around the 5th century BCE. Along the way the Chinese gave us the sundial, the abacus, and the crossbow. And again, the Bronze Age signaled trade empires that were spreading ideas and texts from the Near East to Asia to Europe and Africa and back again. For a couple thousand years the transfer of spices, textiles, and precious metals fueled the Bronze Age empires.

Along the way, the Minoan civilization in modern Greece had been slowly rising out of the Cycladic culture. Minoan artifacts have been found in Canaanite palaces, and as the Minoans grew they colonized and traded. They began a decline around 1500 BCE, likely due to a combination of raiders and volcanic eruptions. The crash of the Minoan civilization gave way to the Mycenaean civilization of early Greece.

Competition for resources and land in these growing empires helped to trigger wars. 

Around 1250 BCE, Thebes burned, and attacks against city-states increased, sometimes by emerging empires of previously disassociated tribes (as would happen later with the Vikings) and sometimes by other city-states. This triggered the collapse of Mycenaean Greece, the splintering of the Hittites, the fall of Troy, the absorption of the Sumerian culture into Babylon, and attacks that weakened the Egyptian New Kingdom. Weakened and disintegrating empires leave room for new players. The Iranian tribes emerged to form the Median empire in today’s Iran. The Assyrians and Scythians rose to power, and the world moved into the Iron Age. And the Greeks fell into the Greek Dark Ages, until they slowly clawed their way out in the 8th century BCE. Around this time, Babylonian astronomers in the Mesopotamian capital of Babylon were keeping astronomical diaries, some of which are now stored in the British Museum.

Greek and Mesopotamian societies weren’t the only ones flourishing. The Indus Valley Civilization had blossomed from 2500 to 1800 BCE, only to go into a dark age of its own. It boasted 5 million people across 1,500 cities, with some of the larger cities reaching 40,000 people - about the same size as Mesopotamian cities. About two thirds of those sites are in modern-day India and a third in modern Pakistan, an empire that stretched across 120,000 square kilometers. As the Babylonian control of the Mesopotamian city-states broke up, the Assyrians began their own campaigns and conquered Persia, parts of Ancient Greece, and lands down through Israel to Ethiopia and Babylon. As their empire grew, they followed into the Indus Valley, which Mesopotamians had been trading with for centuries.

What we think of as modern Pakistan and India is where Medhatithi Gautama founded the Anviksiki school of logic in the 6th century BCE. And so the modern sciences of philosophy and logic were born. As mentioned, we’d had math in the Bronze Age. The Egyptians couldn’t have built pyramids and mapped the stars without it. Hammurabi and Nebuchadnezzar couldn’t have built the Mesopotamian cities and walls and laws without it. But something new was coming as the Bronze Age began to give way to the Iron Age. The Indians brought us one of the first origins of logic, which would morph into an almost Boolean logic as Pāṇini codified Sanskrit grammar, linguistics, and syntax. Almost like a nearly 4,000-verse manual on programming languages. Pāṇini even mentions Greeks in his writings, because the two peoples apparently had contact going back to the sixth century BCE, when Greek philosophy was about to get started.
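The comparison to programming languages isn't just a gag: Pāṇini derived words by applying ordered rewrite rules to forms, which is the same shape as a string-rewriting system in computer science. A hedged sketch - the encoding below is invented for illustration, and only the vowel rule a + i → e loosely echoes a real sandhi pattern (maha + indra → mahendra):

```python
# Ordered rewrite rules, applied until nothing matches -- a toy of the
# rule-driven derivations in Pāṇini's grammar. Rules are illustrative only.

RULES = [
    ("a+i", "e"),  # e.g. maha + indra -> mahendra
    ("a+a", "ā"),  # vowels merge into a long vowel
]

def derive(form: str) -> str:
    """Apply rules until no rule matches, then drop the '+' joiners."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            if pattern in form:
                form = form.replace(pattern, replacement)
                changed = True
    return form.replace("+", "")

print(derive("maha+indra"))  # mahendra
```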

The Neo-Assyrian empire grew to 1.4 million square kilometers of control, and the Achaemenid empire grew to control around 5.5 million square kilometers.

The Phoenicians arose out of the crash of the Late Bronze Age, becoming important traders between the former Mesopotamian city-states and the Egyptians. As they settled lands and Greek city-states colonized others, one of those colonies produced the Greek philosopher Thales, who documented the use of lodestones going back to 600 BCE, when the Greeks were able to use magnetite, which gets its name from the Magnesia region of Thessaly, Greece. He is known as the first philosopher, and by the time of Socrates had become one of the Seven Sages, who according to Socrates were “Thales of Miletus, and Pittacus of Mytilene, and Bias of Priene, and our own Solon, and Cleobulus of Lindus, and Myson of Chenae, and the seventh of them was said to be Chilon of Sparta.”

Many of the fifth and sixth century Greek philosophers were actually born in colonies on the western coast of what is now Turkey. Thales’s theorem is said to have originated in India or Babylon. But, as we see a lot in the times that followed, it is credited to Thales. Given the trading empires they were all a part of, though, they certainly could have brought these ideas back from previous generations of unnamed thinkers. I like to think of him as one of the synthesizers that Daniel Pink refers to so often in his book A Whole New Mind.

Thales studied in Babylon and Egypt, bringing back thoughts and ideas and perhaps intermingling them with those coming in from other areas as the Greeks settled colonies in other lands. Given how critical astrology was to agricultural societies, this meant bringing home astronomy, the math behind the architecture of the Pharaohs, new ways to use calendars likely adopted from the Sumerians, and coinage through trade with the Lydians and then the Persians, once the Persians had conquered the Lydians, Babylon, and the Medes.

So Thales taught Anaximander, who taught Pythagoras of Samos, born a few decades later in 570 BCE. He studied in Egypt as well. Most of us would know the Pythagorean theorem, which he’s credited for, although there is evidence that it predated him in Egypt. Whether new to the emerging Greek world or new to the world writ large, his contributions went far beyond that, though. They included a new student-oriented way of life, numerology, the idea that the world is round, applying math to music and music to lifestyle, and an entire school of philosophers who emerged from his teachings to spread Pythagoreanism. And the generations of philosophers that followed devised both important philosophical contributions and practical applications of new ideas in engineering.
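For anyone who wants the famous result itself stated plainly: in a right triangle, the squares of the two legs sum to the square of the hypotenuse.

```latex
% Pythagorean theorem: legs a, b and hypotenuse c of a right triangle
a^2 + b^2 = c^2
% e.g. the 3-4-5 triangle: 3^2 + 4^2 = 9 + 16 = 25 = 5^2
```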

The ensuing schools of philosophy that rose out of those early Greeks spread. By 508 BCE, the Greeks gave us democracy. And oligarchy, defined as a government where a small group of people have control over a country. Many of these words, in fact, come from Greek roots. As does the month of May, names for symbols and theories in much of the math we use, and many a constellation. That tradition began with the sages but grew, spread by trade, by need, and by religious houses seeking to use engineering as a form of subjugation. 

Philosophy wasn’t exclusive to the Greeks or Indians, or to Assyria and then Persia as they conquered lands and established trade. Buddha came out of modern India in the 5th to 4th century BCE, around the same time Confucianism was born from Confucius in China. And Mohism from Mo Di. Again, trade and the spread of ideas. However, there’s no indication that they knew of each other, or that Confucius could have competed with the other hundred schools of thought alive and thriving in China. Nor would Buddhism begin spreading out of the region for a while. But some cultures were spreading rapidly.

The spread of Greek philosophy reached a zenith in Athens. Thales’ pupil Anaximander also taught Anaximenes, the third philosopher of the Milesian school, which is often included with the Ionians. The thing I love about those three, beginning with Thales, is that they were able to evolve the school of thought without rejecting the philosophies before them. Because ultimately they knew they were simply devising theories as yet to be proven. Another Ionian was Anaxagoras, who served in the Persian army after Persia conquered Ionia in 547 BCE. A Greek citizen living in what was then Persia, Anaxagoras moved to Athens in 480 BCE, teaching Archelaus and, either directly or indirectly through him, Socrates. This provides a link, albeit not a direct one, from the philosophy and science of the Phoenicians, Babylonians, and Egyptians through Thales and others, to Socrates.  

Socrates was born in 470 BCE and mentions several influences including Anaxagoras. Socrates spawned a level of intellectualism that would go on to have as large an impact on what we now call Western philosophy as anyone in the world ever has. And given that we have no writings from him, we have to take the word of his students to know his works. He gave us the Socratic method and his own spin on satire, which ultimately got him executed for effectively being critical of the ruling elite in Athens and for calling democracy into question, corrupting young Athenian students in the process. 

You see, in his lifetime the Athenians lost the Peloponnesian War to Sparta - and as societies often do when they hit a speed bump, they started to listen to those who called intellectuals and scientists into question. That meant trouble for Socrates, who had questioned democracy, and many an Athenian used Socrates as a scapegoat. 

One student of Socrates, Critias, would go on to lead a group called the Thirty Tyrants, who terrorized Athenians and took over the government for a while. They established an oligarchy and appointed their own ruling class. As with many coups against democracy over the millennia, they were ultimately found corrupt and removed from power. But the end of that democratic experiment in Greece was coming.

Socrates also taught other great philosophers, including Xenophon, Antisthenes, Aristippus, and Alcibiades. But the greatest of his pupils was Plato. Plato was as much a scientist as a philosopher. He had the works of Pythagoras and studied under the Libyan mathematician Theodorus. He codified a theory of Ideas, or Forms. He used the Pythagorean theorem and geometry as examples. He wrote many of the dialogues with Socrates and codified ethics, and wrote of a working, a protective, and a governing class, looking to produce philosopher kings. He wrote about the dialectic, using questions, reasoning, and intuition. He wrote of art and poetry and epistemology. His impact was vast. He would teach mathematics to Eudoxus, who in turn taught Euclid. But one of his greatest contributions to the evolution of philosophy, science, and technology was in teaching Aristotle. 

Aristotle was born in 384 BCE and founded a school of philosophy called the Lyceum. He wrote about rhetoric, music, poetry, and theater - as one would expect given the connection to Socrates - but also expanded far past Plato, getting into physics, biology, and metaphysics. And he had a direct impact on the world at the time with his writings on economics and politics. 

He inherited a confluence of great achievements, describing motion, defining the five elements, writing about the camera obscura, and researching optics. He wrote about astronomy and geology, observing both theory and fact, such as ways to predict volcanic eruptions. He made observations that would later be proven (or sometimes disproven), as with modern genomics. He began a classification of living things. His work On the Soul is one of the earliest looks at psychology. His study of ethics wasn’t as theoretical as Socrates’ but practical, teaching virtue and how it leads to the wisdom to become a greater thinker. 

He wrote of economics. He wrote of taxes, managing cities, and property. And this is where he was speaking almost directly to one of his most impressive students, Alexander the Great. Philip II of Macedon hired Aristotle to tutor Alexander starting in 343 BCE. Seven years later, when Alexander inherited his throne, he was armed with arguably the best education in the world combined with one of the best trained armies in history. This allowed him to defeat Darius in 334 BCE, the first of ten years’ worth of campaigns that finally gave him control in 323 BCE.

In that time, he conquered Egypt, which had been under Persian rule on and off, and founded Alexandria. And so what the Egyptians had given to Greece had come home. Alexander died in 323 BCE. He followed the path set out by philosophers before him. Like Thales, he visited Babylon and Egypt. But he went a step further and conquered them. This gave the Greeks more ancient texts to learn from, but also more people who could become philosophers and more people with time to think through problems. 

By the time he was done, the Greeks controlled nearly 5 million square miles of territory. This would be the largest empire until after the Romans. But Alexander never truly ruled. He conquered. Some of his generals and other Greek aristocrats, now referred to as the Diadochi, split up the young, new empire. You see, while teaching Alexander, Aristotle had taught two other future kings: Ptolemy I Soter and Cassander. 

Cassander would rule Macedonia and Ptolemy would rule Egypt from Alexandria, where he and other Greek philosophers founded the Library of Alexandria. Ptolemy and his son amassed hundreds of thousands of scrolls in the Library from 331 BCE on. The Library was part of a great campus, the Musaeum, where they also supported great minds, starting with Ptolemy I’s patronage of Euclid, the father of geometry, and later including Archimedes, the father of engineering, Hipparchus, the founder of trigonometry, Hero, the father of mechanics, and Herophilus, who codified the scientific method - and countless other great Hellenistic thinkers. 

Rome had begun its rise in the 6th century BCE. By the third century BCE the Romans were expanding out of the Italian peninsula. This was the end of Greek expansion, and as Rome conquered the Greek colonies, Greek philosophy waned. Yet that philosophy had helped build Rome, both through a period of colonization and then through the spread of democracy to the young republic, with the kings, or rex, being elected by the senate until the rise of the consuls in 509 BCE. 

After studying at the Library of Alexandria, Archimedes returned home to start his great works, full of ideas having been exposed to so many works. He did rudimentary calculus, proved geometrical theories, approximated pi, explained levers, and founded statics and hydrostatics. And his work extended into the practical. He built machines, pulleys, the infamous Archimedes screw pump, and supposedly even a deadly heat ray of mirrors that could burn ships in seconds. He was sadly killed by Roman soldiers when Syracuse was taken. But - and this is indicative of how the Romans pulled in Greek know-how - the Roman general Marcus Claudius Marcellus was angry to have lost an asset who could have benefited his war campaigns. In fact, Cicero, born in the first century BCE, mentioned that Archimedes built mechanical devices that could show the motions of the planetary bodies. He claimed Thales had designed the first of these and that Marcellus had taken one as his only personal loot from Syracuse and donated it to the Temple of Virtue in Rome. 

The math, astronomy, and physics that went into building a machine like that were the culmination of hundreds, if not thousands, of years of accumulating knowledge of the cosmos, machinery, mathematics, and philosophy. Machines like that would have been the first known computers. Machines like the second or first century BCE Antikythera mechanism, discovered in 1902 in a shipwreck off Greece. Initially thought to be a one-off, the device more likely represents the culmination of generations of great thinkers and doers. Generations that came to look to the Library of Alexandria as almost a Mecca. Until they didn’t. 

The splintering of the lands Alexander conquered, the cost of the campaigns, the attacks from other empires, and the rise of the Roman Empire ended the age of Greek Enlightenment. As is often the case when there is political turmoil and those seeking power hate being challenged by intellectuals - as had happened with Socrates and the philosophers in Athens - Ptolemy VIII sent the Library of Alexandria into a slow decline, beginning with the expulsion of intellectuals from Alexandria in 145 BCE. That decline continued until the library burned, first in a small fire accidentally set by Caesar in 48 BCE and then for good in the 270s CE. 

But before the great library was gone for good, it would produce even more great engineers. Heron of Alexandria was one of the greatest. He created vending machines that would dispense holy water when you dropped in a coin. He made small mechanical archers, models of dancers, and even a statue of a horse that could supposedly drink water. He gave us early steam engines two thousand years before the industrial revolution and ran experiments in optics. He gave us Heron’s formula and an entire book on mechanics, codifying the known works on automation at the time. In fact, he designed a programmable cart using strings wrapped around an axle, powered by falling weights. 
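Heron’s formula still works exactly as he described it: given only the three side lengths of a triangle, it yields the area. A minimal sketch in Python (the function name is mine, purely for illustration):

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths, via Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# The classic 3-4-5 right triangle has area (3 * 4) / 2 = 6.
print(heron_area(3, 4, 5))  # → 6.0
```

Two thousand years on, the same few multiplications and a square root are all it takes.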

Claudius Ptolemy came to the empire from its holdings in Egypt, living in the second century CE. He wrote about harmonics, math, and astronomy, computed the distance from the sun to the earth, and computed the positions of the planets and eclipses, summarizing them into simpler tables. He revolutionized map making and the study of the properties of light.

By then, the Romans had emerged as the first true world power, and the Classical Age of Greece drew to a close.

To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically - it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla; Civilizations by Felipe Fernández-Armesto; A Short History of Technology: From The Earliest Times to AD 1900 by T. K. Derry and Trevor I. Williams; Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer; Leonardo da Vinci by Walter Isaacson; Timelines in Science by the Smithsonian; Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell; a few PhD dissertations and post-doctoral studies from journals; and then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny The Elder, Cassius Dio’s Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato’s Symposium and The Trial & Execution of Socrates.

From Antiquity to Bitcoin: A Brief History of Currency, Banking, and Finance


Today we’re going to have a foundational episode, laying the framework for further episodes on digital piracy, venture capital, accelerators, Bitcoin, PayPal, Square, and others. I’ll try to steer clear of dense macro- and microeconomics and instead just lay out some important times from antiquity to the modern financial system so we don’t have to repeat all this in those episodes. I apologize to professionals in these fields whose life work I am about to butcher in oversimplification. 

Like a lot of nerds who found themselves sitting behind a keyboard writing code, I read a lot of science fiction growing up. The dystopian and utopian outlooks on what the future holds for humanity give us a peek into what progress is. Dystopian interpretations tell of what amount to warlords and a fragmentation of humanity back to what things were like thousands of years ago. The utopian interpretations often revolve around questions about how society will react to social justice, or a market in equilibrium.

The dystopian science fiction represents the past of economics and currency. And the move to online finances and digital currency tracks with what science fiction told us was coming in a more utopian future. My own mental model of economics began with classes on micro and macro economics in college but evolved when I was living in Verona, Italy.

We visited several places built by a family called the Medici. I’d had bank accounts up until then, but that was the first time I realized how powerful banking and finance were as an institution. Tombs, villas, palaces. The Medici built lasting edifices to the power of their clan. They didn’t invent money, but they made enough to be on par with the richest modern families. 

It’s easy to imagine humans from the times of hunter-gatherers trading an arrowhead for a chunk of meat. As humanity moved to agriculture and farming, we began to use grain and cattle as currency. By 8000 BC people began using tokens for trade in the Middle East. And metal objects came to be traded as money around 5000 BC.

And around 3000 BC we started to document trade. Where there’s money and trade, there will be abuse. By 1700 BC the early Mesopotamians even issued regulations for the banking industry in the Code of Hammurabi. By then private institutions were springing up to handle credit, deposits, interest, and loans, some of which were handled on clay tablets. 

And that term private is important. These banking institutions were private endeavors. As the Egyptian empire rose, farmers could store grain in warehouses, and during the Ptolemaic era they began to trade the receipts of those deposits. We can still think of these as tokens and barter items, though. Banking had begun around 2000 BC in Assyria and Sumeria, but these were private institutions effectively setting their own splintered and sometimes international markets. Gold was being used, but it had to be measured and weighed each time a transaction was made. 

Until the Lydian stater. Lydia was an empire that began in 1200 BC and was conquered by the Persians around 546 BC. It covered western Anatolia, in what is now Turkey, including the modern cities of Salihli and Manisa. One of its most important contributions to the modern world was the first state-sponsored coinage, in 700 BC. The coins were electrum, a mix of gold and silver. 

And here’s the most important part. The standard weight was guaranteed by an official stamp. The Lydian king Croesus then added the concept of bimetallic coinage: one coin made of gold and the other of silver, each a different denomination, with the lower denomination worth one twelfth of the higher. They then figured out a way to keep counterfeit coins off the market with a Lydian stone, a touchstone whose streak marks could be compared against those made by genuine gold coins. And thus modern coinage was born. The Lydian merchants became the merchants who moved goods between Greece and Asia, spreading the concept of the coin. Cyrus II defeated the Lydians, and Darius the Great would issue the gold daric, with a warrior king wielding a bow. And so heads of state came to adorn coins. 

As with most things in antiquity, there are claims that China or India introduced coins first. Bronzed shells have been discovered in the ruins of Yin, the old capital of the Shang dynasty dating back hundreds of years before the Lydians. But if we go there this episode will be 8 hours long. 

Exodus 22:25-27 “If you lend money to my people—to any poor person among you—never act like a moneylender. Charge no interest.”

Let’s put that Bible verse in context. So we have coins and banks. And international trade, mostly based on the weight of the coins. Commerce rose, and over the centuries banks got so big they couldn’t be allowed to fail without crashing the economy of an empire. Julius Caesar expanded the empire of Rome, and gold flowed in from conquered lands. One thing that seems constant through history is that interest rates from legitimate lenders tend to range from 3 to 14 percent. Anything less and you are losing money. Anything more and you’ve penalized the borrower to the point they can’t repay the loan. The scarcer the capital, the more you have to charge. Like the US in the 80s. So old Julius met an untimely fate, there were wars, and Augustus managed to solidify the empire. Augustus reformed taxes and introduced a lot of new services to the state: he built roads, established a standing army and the Praetorian Guard, set up official firefighting and police forces, and laid down much of the old Roman road system the empire is now so well known for. It was a reign of over 40 years and one of the greatest in history. But greatness is expensive. 

Tiberius had to bail out banks and companies in the year 33. Moneylending sucks when too many people can’t pay you back. Augustus had solidified the Roman Empire, and by the time Tiberius came around, Rome was a rich import destination. Money was being lent abroad, and there was less and less gold in the city. Interest rates had plummeted to 4 percent. Again, we’re in a time when money was based on the weight of a coin, and there simply weren’t enough coins in circulation given the reach of the empire. And so, for all my Libertarian friends - empires learned the hard way that business and commerce are essential services and must be regulated. If money cannot be borrowed, crime explodes. People cannot be left to starve. Especially when we don’t all live on land that can produce food any more. 

Any time the common people are left behind, there is a revolt. The greater the disparity, the greater the revolt. The early Christians were heavily impacted by the money lending practices of the era between Julius Caesar and Tiberius, and the Bible, read as an economic textbook, is littered with references to usury, showing the blame placed on emerging financial markets for the plight of the commoner. Progress often involves two steps forward and one back to let all of the people in a culture reap the rewards of innovations.  

The Roman Empire continued on gloriously for a long, long time. Over time, Rome fell. Other empires came and went. As they did, they minted coins to prove how important the ruling faction was. It’s easy to imagine a farmer in the dark ages following the collapse of the Roman Empire dying and leaving half of the farm to each of two children. Effectively each owns one share. That stock can then be used as debt and during the rise of the French empire, 12th century courretiers de change found they could regulate debts as brokers. The practice grew. 

Bankers work with money all day. They get crafty and think of new ways to generate income. The Venetians were trading government securities, and in 1351 they outlawed spreading rumors to lower the prices of those securities - and thus market manipulation was born. By 1409 Flemish traders began to broker the trading of debts in Bruges at an actual market. Italian companies began issuing shares, and joint stock companies were born, allowing for the colonization of the Americas as extensions of European powers. That colonization increased the gold supply in Europe fivefold, resulting in the first great gold rush. 

European markets, flush with cash, speculation, and investment, grew, and by 1611 the stock market was born in Amsterdam. The Dutch East India Company sold shares to the public and brought us options, bonds, and derivatives. Dutch perpetual bonds were introduced, and one issued in 1629 is still paying interest. So we got the bond market for raising capital. 

Over the centuries leading to the industrial revolution, banking, finance, and markets became the means by which capitalism and private property replaced totalitarian regimes, the power of monarchs, and the centralized control of production. As the markets rose, modern economics was born, with Adam Smith codifying much of the known work at that point, including that of the French physiocrats. The gold standard began around 1696 and grew in popularity. The concept was to allow paper money to be freely convertible into a pre-defined amount of gold. Therefore, paper money could replace gold and still be backed by gold, just as in antiquity. By 1789 we were running a bit low on gold, so we introduced the bimetallic standard, where silver was worth one fifteenth of gold and a predefined market ratio was set.  

Great thinking in economics goes back to antiquity, but since the time of Tiberius, rulers had imposed regulation. That came in the form of taxes to pay for public goods, bailouts for businesses too essential to fail, and tariffs to control the movement of goods in and out of a country. To put it simply, if too much gold left the country, interest rates would shoot up, inflation would devalue the ability to buy goods, and as people specialized in industries, those who didn’t produce food, like the blacksmiths or cobblers, wouldn’t be able to buy food. And when people can’t buy food, bad things happen. 

Adam Smith believed in self-regulation though, which he codified in his seminal work Wealth of Nations, in 1776. He believed that what he called the “invisible hand” of the market would create economic stability, which would lead to prosperity for everyone. And that became the framework for modern capitalistic endeavors for centuries to come. But not everyone agreed. Economics was growing and there were other great thinkers as well. 

Again, things fall apart when people can’t get access to food, and so Thomas Malthus responded with a theory that the rapidly growing populations of the world would outgrow the ability to feed all those humans. Where Smith had focused on the demand for goods, Malthus focused on scarcity of supply. That led another economist, Karl Marx, to see the means of production as key to providing the Maslowian hierarchy of needs. He saw capitalism as unstable and believed the creation of an owner (or stock trader) class and a working class was contrary to finding balance in society. He accurately predicted the growing power of business and how that power would control, and so hurt, the worker to the benefit of the business. We got marginalism, general equilibrium theory, and over time we could actually test theories; the concepts that began with Smith became a science, economics, with that branch known as neoclassical.

Lots of other fun things happened in the world. Bankers began instigating innovation and progress. Booms or bull markets came, markets over-indexed and/or supplies became scarce, and recessions or bear markets ensued. Such is the cycle. To ease the burdens of an increasingly complicated financial world, England officially adopted the gold standard in 1821, which led to the emergence of the international gold standard, adopted by Germany in 1871 and, by 1900, most of the world. Gaining in power and influence, the nations of the world stockpiled gold up until World War I in 1914. The international political upheaval led to a loss of faith in the gold standard, and the global gold supply began to fall behind the growth in the global economy. 

JP Morgan dominated Wall Street in what we now call the Gilded Age. He made money by reorganizing and consolidating railroad businesses throughout America. He wasn’t just the banker; he was the one helping those businesses become more efficient, digging into how they worked and reorganizing and merging corporate structures. He then financed Edison’s research and instigated the creation of General Electric. He lost money investing in a Tesla project when Tesla wanted to go wireless. He bought Carnegie Steel in 1901, the first modern buyout, which gave us US Steel. The industrialists from the turn of the century increased productivity at a rate humanity had never seen. We had the biggest boom market humanity had ever seen, and then, when the productivity gains slowed and the profits and earnings masked the slowdown in output, a bubble of sorts formed and the market crashed in 1929. 

These markets are about returns on investments. Those require productivity gains, as they are usually based on margin, or the ability to sell more goods without increasing the cost - thus the need for productivity gains. That crash in 1929 sent panic through Wall Street and wiped out investors around the world. Consumer confidence, and so spending and investment, was destroyed. With demand sharply reduced, industrial output faltered and workers were laid off, creating a vicious cycle. 

The crash also signaled the end of the gold standard. The pound and franc were mismanaged, the newly risen power Germany was having trouble repaying war debts, commodity prices collapsed, and, thinking a reserve of gold would keep them legitimate, countries raised interest rates, further damaging the global economy. High interest rates reduce investment. England finally suspended the gold standard in 1931, which sparked other countries to do the same, with the US raising the number of dollars per ounce of gold from $20 to $35 and so obtaining enough gold to back the US dollar as the de facto standard. 

Meanwhile, science was laying the framework for the next huge boom - which would be greater in magnitude, margins, and profits. Enter John Maynard Keynes and Keynesian economics, the rise of macroeconomics. In a departure from neoclassical economics, he believed that the world economy had grown to the point that aggregate supply and demand would not find equilibrium without government intervention. In short, the invisible hand would need to be joined by a visible hand: the government. By then, the Bolsheviks had established the Soviet Union and Mao had founded the communist party in China. The idea that there had been a purely capitalist society since the time the Egyptian government built grain silos, or since Tiberius had rescued the Roman economy with bailouts, was a fallacy. The US and other governments began spending, and incurring debt to do so, and we began to dig the world out of a depression.

But it took another world war to get there. And that war did more than just end the Great Depression. World War II was one of the greatest rebalancings of power the world has known - arguably even greater than the fall of the Roman and Persian empires and the shifts between Chinese dynasties. In short, we implemented a global world order of sorts in order to keep another war like that from happening. Globalism works for some and doesn’t work well for others. It’s easy to look on the global institutions built in that time as problematic. And organizations like the UN and the World Bank should evolve so they do more to lift all people up, so not as many around the world feel left behind. 

The systems of governance changed world economics. The Bretton Woods Agreement would set the framework for global currency markets until 1971. Here, all currencies were valued in relation to the US dollar, which, after that great rebalancing, now sat on 75% of the world’s gold. The gold was still backed at a rate of $35 per ounce. And the Keynesian International Monetary Fund would begin managing the balance of payments between nations. Today there are 190 countries in the IMF.

Just as implementing the gold standard set the framework that allowed the investments that sparked capitalists like JP Morgan, an indirect financial system backed by gold through the dollar allowed for the next wave of investment, innovation, and so productivity gains.

This influx of money and investment meant there was capital to put to work, and so bankers and financiers working with money all day devised new and clever instruments with which to do so. After World War II, we got the rise of venture capital: a number of financial instruments that have evolved so qualified investors can effectively make bets on a product or idea. Derivatives of venture include incubators and accelerators. 

The best example of the early venture capital deals would be when Ken Olsen and Harlan Anderson raised $70,000 in 1957 to usher in the age of transistorized computing. DEC rose to become the second largest computing company - helping revolutionize knowledge work and introduce a new wave of productivity gains and innovation. They went public in 1968, and the investor made over 500 times the investment, receiving $38 million in stock. More importantly, he stayed a friend and confidant of Olsen and invested in over 150 other companies. 

The ensuing neoclassical synthesis of economics basically informs us that free markets are mostly good and efficient, but if left to just Smith’s invisible hand, from time to time they will threaten society as a whole. Rather than another dark age, we can continue to evolve by keeping markets moving and so keeping large scale revolts at bay. As Asimov effectively pointed out in Foundation, this preserves human knowledge. And it strengthens economies, as we can apply math, statistics, and the rising computers to monetary rather than fiscal policy, as Friedman would say, to keep the economy in equilibrium. 

Periods of innovation like we saw in the computer industry in the post-war era always seem to leave behind the people the innovation displaces. When enough people are displaced, we return to tribalism, nationalism, thoughts of fragmentation, and moves back in the direction of dystopian futures. Acknowledging that people are left behind and finding remedies is better than revolt and retreating from progress - and showing love to your fellow human is just the right thing to do. Not doing so creates recessions, like the ups and downs the market has seen in the gaps between innovative periods.

The stock market went digital in 1966, allowing more and more trades to be processed every day. Instinet was founded in 1969, allowing brokers to make after-hours trades. NASDAQ went online in 1971, removing the trading floor that had been around since the 1600s. And as money poured in, ironically, gold reserves started to go down a little. Just as the Romans under Tiberius saw money leave the country as investment, US gold was moving to other central banks, mostly those allied with NATO, to help rebuild their countries. But those countries continued to release bank notes to pay for rebuilding, creating a period of hyperinflation.

As with other times when gold became scarce, interest rates became unpredictable, moving from 3 to 17 percent and back again until they began to steadily decline in 1980. 

Gold would be removed from the London market in 1968, and other countries began to cash out their US dollars for gold. Belgium, the Netherlands, then Britain cashed in their dollars, and much as had happened under the reign of Tiberius, there wasn’t enough to sustain the financial empires created. This was the turning point for the end of the informal links back to the gold standard. By 1971 Nixon was forced to sever the relationship between the dollar and gold, and the US dollar, by then the global standard going back to the Bretton Woods Agreement, became what’s known as fiat money. Bretton Woods was officially over and the new world order was morphing into something else. Something less easily explainable to common people. A system where the value of currency was based not on a link to gold but on the perception of a country, as stocks were about to move from an era of performance and productivity to something more speculative.

Throughout the 80s, more and more orders were processed electronically, and by 1996 we were processing orders online. The 2000s saw algorithmic and high-frequency trading. By 2001 we could trade in pennies, and the rise of machine learning created billionaire hedge fund managers. Although earlier versions were probably more just about speed. Like: if EPS is greater than expected EPS, and guidance EPS is greater than EPS, then buy real fast, analyze the curve, and sell when it tops out. Good for them for making all the moneys, but while each company is required to be transparent about its financials, high-frequency trading has gone from rewarding companies with high earnings to seeming more like a social science, where the rising and falling is based on confidence about an industry and the management team.
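That crude earnings rule of thumb can be sketched in a few lines. This is purely an illustration of the logic described above, not a real trading strategy; the `Earnings` type, field names, and numbers are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Earnings:
    eps: float           # reported earnings per share
    expected_eps: float  # analyst consensus before the report
    guidance_eps: float  # the company's forward guidance

def naive_signal(e: Earnings) -> str:
    # Buy on an earnings beat where guidance promises further growth;
    # otherwise do nothing. The speed-based systems described above
    # raced to act on rules like this the moment a filing hit the wire.
    if e.eps > e.expected_eps and e.guidance_eps > e.eps:
        return "buy"
    return "hold"

print(naive_signal(Earnings(eps=2.10, expected_eps=2.00, guidance_eps=2.25)))  # buy
print(naive_signal(Earnings(eps=1.90, expected_eps=2.00, guidance_eps=2.25)))  # hold
```

The point of the sketch is how little of it has anything to do with the underlying business, which is exactly the "social science" drift described above.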

It became harder and harder to explain how financial markets work. Again, bankers work with money all day and come up with all sorts of financial instruments to invest in with their time. The quantity and types of these became harder to explain. Junk bonds, penny stocks, and, to an outsider, strange derivatives. And so moving to digital trading is only one of the ways the global economy no longer makes sense to many. 

Gold and other precious metals can’t be produced at a rate faster than humans are produced. And so they had to give way to other forms of money and currency, which diluted the relationship between people and a finite, easy to understand, market of goods. 

As we moved to a digital world, there were thinkers who saw the future of currency as flowing electronically. Russian cyberneticist Anatoly Kitov theorized electronic payments back in the 50s, and then came ATMs; the rise of digital devices paved the way for both to finally manifest over the ensuing decades. Credit cards moved the credit market into more micro-transactional territory, creating industries where shopkeepers had once kept debits in a more distributed ledger. As the links between financial systems increased and innovators saw the rise of the Internet on the way, more and more devices got linked up.

This, combined with the libertarianism of many in the next wave of Internet pioneers, led people to think of ways to build a new digital currency. David Chaum thought up ecash in 1983, using encrypted keys, much as PGP did for messages, to establish a digital currency. In 1998, Nick Szabo came up with the idea for what he called bit gold, a digital currency based on cryptographic puzzles: solved puzzles would be sent to a public registry using a public key, and the party who solved the puzzle would receive a private key. This was kinda’ like using a mark on a Lydian rock to make sure coins were gold. He didn’t implement the system but had the initial concept that it would work similar to the gold standard, just without a central authority like the World Bank. 
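A minimal sketch of that kind of cryptographic puzzle, in the proof-of-work style bitcoin later adopted. This is not Szabo's actual scheme (he never implemented one); the hash function, the zero-byte difficulty rule, and the nonce encoding are all assumptions chosen for illustration.

```python
import hashlib
from itertools import count

def solve_puzzle(challenge: bytes, difficulty: int = 2) -> int:
    # Brute-force a nonce whose SHA-256 digest over the challenge
    # starts with `difficulty` zero bytes. Expensive to find, trivial
    # for anyone to verify -- the digital analog of assaying a coin.
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if digest.startswith(b"\x00" * difficulty):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int = 2) -> bool:
    # Anyone can check a claimed solution with a single hash.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return digest.startswith(b"\x00" * difficulty)
```

In bit gold's framing, the solved puzzle would then be signed and published to the public registry; the registry and key handling are left out here to keep the sketch short.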

This was all happening concurrently with the rise of ubiquitous computing, the move away from checks to debit and credit cards, and the continued mirage that clouded what was really happening in the global financial system. There was a rise in e-commerce, with various sites emerging to buy products in a given industry online. Speculation increased, creating a bubble around Internet companies. That dot-com bubble burst in 2001 and markets briefly retreated from the tech sector. 

Another bull market was born around the rise of Google, Netflix, and others. Productivity gains were up and a lot of money was being put to work in the market, creating another bubble. Markets are cyclical and need to be reined back in from time to time. That’s not to minimize the potentially devastating impacts on real humans. The Global Financial Crisis of 2008 came along for a number of reasons, mostly tied, to oversimplify the matter, to the bursting of a housing bubble. The lack of liquidity at banks caused a crash, and the lack of regulation caused many to think through the nature of currency and money in an increasingly globalized and digital world. After all, if the governments of the world couldn’t protect the citizenry from seemingly unscrupulous markets, then why not have completely deregulated markets where the invisible hand does so?

Which brings us to the rise of cryptocurrencies.

Who is John Galt? Bitcoin was invented by Satoshi Nakamoto, who created the first blockchain database and brought the world into peer-to-peer currency in 2009 when Bitcoin 0.1 was released. Satoshi mined block 0 of bitcoin for 50 bitcoins. Over the next year Satoshi mined a potential of about a million bitcoins. Back then a bitcoin was worth less than a penny. As bitcoin grew and the number of bitcoins mined into the blockchain increased, the scarcity increased and the value skyrocketed, reaching over $15 billion as of this writing. Who is Satoshi Nakamoto? No one knows; the name is a pseudonym. Other cryptocurrencies have risen, such as Ethereum. And the market has largely been allowed to evolve on its own, with regulators and traditional financiers seeing it as a fad. Is it? Only time will tell. 
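The core of the blockchain idea, a chain of records where each block commits to the hash of the block before it, can be shown in a few lines. This is a toy sketch, leaving out bitcoin's proof-of-work, signatures, and the peer-to-peer network entirely; the field names are made up for the example.

```python
import hashlib
import json

def make_block(index: int, data: str, prev_hash: str) -> dict:
    # Each block embeds the hash of its predecessor, so altering any
    # earlier block invalidates every block that comes after it.
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    # The chain holds only if every link matches.
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

genesis = make_block(0, "coinbase: 50 coins", "0" * 64)
block1 = make_block(1, "a -> b: 10 coins", genesis["hash"])
print(chain_is_valid([genesis, block1]))  # True
```

Rewriting history means recomputing every later hash, which is what proof-of-work makes prohibitively expensive in the real system.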

There are an estimated 200,000 tonnes of gold in the world, worth about 9.3 trillion dollars, if so much of it weren’t stuck in necklaces and teeth buried in the ground. The US sits on the largest stockpile of it today, at 8,000 tonnes, worth about a third of a trillion dollars, followed by Germany, Italy, and France. By contrast there are 18,000,000 bitcoins, with a value of about $270 billion, a little less than the US supply of gold. And the global stock market is valued at over $85 trillion.

The global financial markets are vast. They include the currencies of the world and the money markets that trade those. Commodity markets, real estate, the international bond and equity markets, and derivative markets which include contracts, options, and credit swaps. This becomes difficult to conceptualize because as one small example in the world financial markets, over $190 billion is traded on stock markets a day. 

Seemingly, rather than running on gold reserves, markets are increasingly driven by how well they put debt to work. National debts are an example of that. The US national debt currently stands at over $27 trillion. Much is held by our own people as bonds, although some countries hold some as security as well, including governments like Japan and China, which hold about the same amount of US debt if you include Hong Kong with China. But what does any of that mean? US GDP sits at about $22.3 trillion. So we owe a little more than we make in a year, much as many families with mortgages, credit cards, and so on might owe about as much as they make. And roughly 10% of our taxes go to pay interest, just as we pay interest on mortgages. 
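The back-of-the-envelope math there is straightforward, using the rounded figures from this episode:

```python
debt = 27.0  # US national debt, trillions of dollars (episode's figure)
gdp = 22.3   # US GDP, trillions of dollars (episode's figure)

# Debt-to-GDP: how many years of total output it would take to pay
# off the debt if every dollar produced went toward it.
ratio = debt / gdp
print(f"debt-to-GDP ratio: {ratio:.2f}")  # about 1.21
```

A ratio a little over 1 is the "owe a bit more than we make in a year" framing, the same way a household compares its mortgage to its annual income.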

Most of this is transparent. As an example, government debt is often held in the form of a treasury bond, and Treasury websites list who holds which bonds. Nearly every market discussed here can be traced on a per-transaction basis, with many transactions being a matter of public record. And yet there is a common misconception that the market is controlled by a small number of people. Like a cabal. But as with most perceived conspiracies, the global financial markets are much more complex. There are thousands of actors who think they are acting rationally but are simply speculating. And there are a few who are committing a crime by violating or inorganically manipulating markets, as has been illegal since the Venetians passed their first laws on the matter. Most day traders will eventually lose all of their money. Most market manipulators will eventually go to jail. But there’s a lot of grey in between. And that can’t entirely be planned for. 

At the beginning of this episode I mentioned it was a prelude to a deeper dive into digital piracy, venture capital, Bitcoin, PayPal, Square, and others. Piracy, because it potentially represents the greatest redistribution of wealth since the beginning of time. Baidu and Alibaba have made their way onto public exchanges. Ant Group has the potential to be the largest IPO in history. Huawei is supposedly owned by its employees. You can also buy stocks in Russian banking, oil, natural gas, and telecom companies. 

Does this mean that the split created when the ideas of Marx became a political movement that resulted in communist regimes is over? No. These have the potential to create a bubble, one that will then need correcting, maybe even based on intellectual property damage claims. The seemingly capitalistic forays made by socialist or communist countries just go to show that there really isn’t, and has never been, a purely capitalist, socialist, or communist market. Instead, they’re spectrums separated by a couple of percentage points of tax here and there to pay for whatever services or goods each nation holds as important enough to be universal, to whatever degree that tax can provide the service or good. 

So next time you hear “you don’t want to be a socialist country, do you?”, keep in mind that every empire in history has simply been somewhere in a range from a free market to a state-run market. The Egyptians provided silos, the Lydians coined gold, the Romans built roads and bailed out banks, nations adopted gold as currency, then built elaborate frameworks to gain market equilibrium. Along the way markets have been abused, then regulated, then deregulated. The rhetoric used today, though, is really a misdirection play handed down by people with ulterior motives. You know, like back in the Venetian times. I immediately think of dystopian futures when I feel I’m being manipulated. That’s what charlatans do. That’s not quite so necessary in a utopian outlook.

How Not To Network A Nation: The Russian Internet That Wasn't


I just finished reading a book by Ben Peters called How Not To Network A Nation: The Uneasy History of the Soviet Internet. The book is an amazing deep dive into the Soviet attempts to build a national information network, primarily in the 60s. The book covers a lot of ground and has a lot of characters, although the most recurring is Viktor Glushkov; if the protagonist isn’t the Russian scientific establishment, perhaps it is Glushkov. And if there’s a primary theme, it’s looking at why the Soviets were unable to build a data network that covered the Soviet Union, allowing the country to leverage computing at a micro and a macro scale.

The final chapter of the book is one of the best and most insightful summaries I’ve ever read on the history of computers. While he doesn’t directly connect the command-and-control heterarchy of the former Soviet Union to how many modern companies are run, he does identify a number of ways that the Russian scientists, at least in their zeal for a technocratic economy, were almost more democratic than the US military-industrial-university complex of the 60s.  

The sources and bibliography are simply amazing. I wish I had time to read, listen to, and digest all of the information that went into the making of this amazing book. And the way he cites notes that build to conclusions. Just wow.

In a previous episode, we covered the memo “Memorandum for Members and Affiliates of the Intergalactic Computer Network,” sent by JCR Licklider in 1963. This was where the US Advanced Research Projects Agency instigated a nationwide network for research. That network, called ARPAnet, would go online in 1969, and the findings would evolve and change hands when privatized into what we now call the Internet. We also covered the emergence of cybernetics, which Norbert Wiener defined in 1948 as the systems-based science of communication and automatic control systems, and we covered the other individuals influential in its development. 

It’s easy to draw a straight line between that line of thinking and the evolution that led to the ARPAnet. In his book, Peters shows how Glushkov discovered cybernetics and came to the same conclusion that Licklider had: that the USSR needed a network that would link the nation. He was a communist, and so the network would help automate the command economy of the growing Russian empire, an empire that would need more people managing it than there were people in Russia if the bureaucracy continued to grow at the pace required to do the manual computing to get resources to factories and goods to people. He had this epiphany after reading Wiener’s book on cybernetics, which had been hidden away from the Russian people as American propaganda. 

Glushkov’s contemporary, Anatoly Kitov, had come to the same realization back in 1959. By 1958 the US had developed the Semi-Automatic Ground Environment, or SAGE; the last of that equipment went offline in 1984. SAGE was a system of networked radar equipment, linked by computers built by IBM, that could be used as eyes in the sky to detect a Soviet attack. A system capable of detecting influence in elections would have seemed crazy to think about a few years ago, but today, maybe not so much. 

The Russians saw such defenses as cost prohibitive. Yet at Stalin’s orders they began to develop radar sites in a network of sorts around Moscow in the early 50s, extending to Leningrad. They developed the BESM-1 mainframe in 1952 and 1953, and while Stalin was against computing and western cybernetic doctrine outside of the military, as in America they were certainly linking sites to launch missiles. Lev Korolyov worked on BESM and then led the team that built the ballistic missile defense system. 

So it should come as no surprise that after a few years Soviet scientists like Glushkov and Kitov would look to apply military computing know-how to fields like running the economics of the country. 

Kitov had seen technology patterns before they came. He studied nuclear physics before World War II, then rocketry after the war, and he then went to the Ministry of Defence at Bureau No 245 to study computing. This is where he came in contact with Wiener’s book on Cybernetics in 1951, which had been banned in Russia at the time. Kitov would work on ballistic missiles and his reputation in the computing field would grow over the years. Kitov would end up with hundreds of computing engineers under his leadership, rising to the rank of Colonel in the military. 

By 1954 Kitov was tasked with creating the first computing center for the Ministry of Defence, which would take on the computing tasks for the military. He oversaw the development of the M-100 computer and the transition into transistorized computers. By 1956 he had written a book called “Electronic Digital Computers,” and over time his views on computers grew to include solving problems that went far beyond science and the military, like running the economy.

Kitov came up with the Economic Automated Management System in 1959. This was denied because the military didn’t want to share their technology. Khrushchev sent Brezhnev, who was running the space program and an expert in all things tech, to meet with Kitov. Kitov was suggesting they use this powerful network of computer centers to run the economy when the Soviets were at peace and the military when they were at war. 

Kitov would ultimately realize that the communist party did not want to automate the economy. But his “Red Book” project would ultimately fizzle into one of reporting rather than command and control over the years. 

The easy answer as to why would be that Stalin had considered computers the tool of imperialists, and that feeling continued with some in the communist party. The issues are much deeper than that, though, and go to the heart of communism. You see, while we want to think that communism is about the good of all, it is irrational to think that people will not act in their own self-interest, at both the microeconomic and the macroeconomic level. And automating command certainly seems to reduce the power of those in power, who see that command taken over by a machine. And so Kitov was expelled from the communist party and could no longer hold a command. 

Glushkov then came along, recommending the National Automated System for Computation and Information Processing, or OGAS for short, in 1962. He had worked on computers in Kyiv and then moved to become the director of the Computer Center at the Academy of Science of Ukraine. Being even more bullish on the rise of computing, Glushkov went further still and added an electronic payment system on top of controlling a centrally planned economy. Computers were on the rise in various computer centers and other locations, and it just made sense to connect them. And they did, at small scales. 

As was done at MIT, Glushkov built a walled garden of researchers in his own secluded nerd-heaven. He too made a grand proposal. He too saw the command economy of the USSR as one that could be automated with a computer, much as many companies around the world were employing ERP solutions in the coming decades. 

The Glushkov proposal continued all the way to the top. They were able to show substantial return on investment yet the proposal to build OGAS was ultimately shot down in 1970 after years of development. While the Soviets were attempting to react to the development of the ARPAnet, they couldn’t get past infighting. The finance minister opposed it and flatly refused. There were concerns about which ministry the system would belong to and basically political infighting much as I’ve seen at many of the top companies in the world (and increasingly in the US government). 

A major thesis of the book is that the Soviet entrepreneurs trying to build the network acted more like capitalists than communists, while the Americans building our early networks acted more like socialists than capitalists. This isn’t about individual financial gain, though. Glushkov and Kitov in fact saw how computing could automate the economy to benefit everyone. But a point Peters makes in the book centers on informal financial networks. Peters points out that blat, the informal trading of favors that we might call a black market or corruption, was commonplace. An example he uses in the book: if a factory performs at 101% of expected production, the manager can slide under the radar. But if it performs at 120%, those gains will be expected permanently, and if it ever dips below the expected productivity, the manager might meet a poor fate. Thus blat provided a way to trade goods informally and keep the status quo. A computer doing daily reports would make this kind of flying under the radar of Gosplan, the Soviet State Planning Committee, difficult. Thus factory bosses would likely enter inaccurate information into the computers, furthering the tolkachs, or pushers, of blat. 

A couple of points I’d love to add onto those Peters made, which wouldn’t be obvious without that amazing last chapter of the book. The first is that I’ve never read Bush, Licklider, or any of the early pioneers claim computers should run a macroeconomy. The closest they came was computers running the markets of a capitalist economy. The New York Stock Exchange began the process of going digital in 1966, when the Dow was at 990. The Dow sat at about that same place until 1982. Can you imagine that these days? Things looked bad when it dropped to 18,500. And the London Stock Exchange held out on going digital until 1986, just a few years after the Dow finally moved over a thousand. Think about that as it hovers around 26,000 today. And look at the companies and imagine which could get by without computers running their business, much less which are computer companies. There are 2 to 6 billion trades a day. It would probably take more than the population of Russia just to push those numbers if it all weren’t digital. In fact now there’s an app (or a lot of apps) for that. But the point is, going back to Bush’s Memex, computers were to aid in human decision making. In a world with an exploding amount of data about every domain, Bush prophesied the Memex would help connect us to data and help us do more. That underlying tenet infected everyone who read his article, and it’s something I think of every time I evaluate an investment thesis based on automation. 

There’s another point I’d like to add to this most excellent book. Computers developed in the US were increasingly general purpose and democratized. This led to innovative new applications just popping up and changing the world, like spreadsheets and word processors. Innovators weren’t just taking a factory “online” to track the number of widgets sold and deploying ICBMs; they were building foundations for anything a young developer wanted to build. The uses in education with PLATO, in creativity with Sketchpad, in general-purpose languages and operating systems, in early online communities with mail and bulletin boards, in the democratization of the computer itself with the rise of the PC and the rapid proliferation that came with the introduction of games, and then the democratization of raw information with the rise of gopher and the web and search engines. Miniaturized and in our pockets, those are the building blocks of modern society. And the word democratization means a lot to me.

But as Peters points out, sometimes the Capitalists act like Communists. Today we close down access to various parts of those devices by the developers in order to protect people. I guess the difference is now we can build our own but since so many of us do that at #dayjob we just want the phone to order us dinner. Such is life and OODA loops.

In retrospect, it’s easy to see how technological determinism would lead to global information networks. It’s easy to see electronic banking and commerce and that people would pay for goods in apps, with Amazon stock soaring over $3,000, what Jack Ma has done with Alibaba, and the empires built by the technopolies at Amazon, Apple, Microsoft, and dozens of others. In retrospect, it’s easy to see the productivity gains. But at the time, it was hard to see the forest for the trees. The infighting got in the way. The turf-building. The potential of a bullet in the head from your contemporaries when they get in power can do that, I guess. 

And so the networks failed to be developed in the USSR, while ARPAnet was transferred to the National Science Foundation in 1985 and the other nets grew until it was all privatized into the network we call the Internet today, around the same time the Soviet Union was dissolved. As we covered in the episode on the history of computing in Poland, empires simply grow beyond the communications mediums available at the time. By the fall of the Soviet Union, US organizations were networking in a build-up from early adopters, who made great gains in productivity and signaled the chasm crossing that was the merging of the nets into the Internet. And people were using modems to connect to message boards and work with data remotely. Ironically, China has since splinternetted that merged Internet, and Russia seems poised to splinter it further. But just as hiding Wiener’s cybernetics book from the Russian people slowed technological determinism in that country, cutting various parts of the Internet off in Russia will slow progress if it happens.

The Soviets did great work on macro- and microeconomic tracking and modeling under Glushkov and Kitov. Understanding what you have and how data and products flow is one key aspect of automation, and sometimes even more important in helping humans make better-informed decisions. Chile tried something similar in the early 1970s under Salvador Allende, but that system failed as well. 

And there’s a lot to digest in this story. But that word progress is important. Let’s say that Russian or Chinese crackers steal military-grade technology from US or European firms. Yes, they get the tech, but not the underlying principles that led to the development of that technology. Just as the US and its partners don’t proliferate all of their ideas and ideals when they restrict the proliferation of that technology in foreign markets. Phil Zimmermann opened the floodgates when he printed the PGP source code to enable the export of military-grade encryption. The privacy gained in foreign theaters contributed to greater freedoms around the world. And crime. But crime will happen in an oppressive regime just as it will in one espousing freedom. 

So for you hackers tuning in, whether you’re building apps, hacking business, or reengineering for a better tomorrow: next time you’re sitting in a meeting and progress is being smothered at work, or next time you see progress being suffocated by a government, remember that those you think are trying to hold you back either don’t see what you see, are trying to protect their own power, or might just be trying to keep progress from outpacing what their constituents are ready for. And maybe those are sometimes the same thing, just from a different perspective. Because going fast at all costs not only leaves people behind but sometimes doesn’t build a better mousetrap than what we have today. Or, go too fast and, like Kitov, you get stripped of your command. No matter how much of a genius you, or your contemporary Glushkov, are. The YouTube video called “Internet of Colonel Kitov” has a great quote: “pioneers are recognized by the arrows sticking out of their backs.” But hey, at least history was on their side! 

Thank you for tuning in to the History of Computing Podcast. We are so, so, so lucky to have you. Have a great day and I hope you too are on the right side of history!

From The Press To Cambridge Analytica


Welcome to the History of Computing podcast. Today we’re going to talk about the use of big data in elections. But first, let’s start with a disclaimer. I believe the problems outlined in this episode are apolitical. Given the chance, I believe most politicians (or marketers), regardless of party, would have jumped on what is outlined in this podcast. Just as most marketers are more than happy to buy data, even without knowing the underlying source of that data. No offense to the parties, but marketing is marketing, just as it is in companies. Data will be used to gain an advantage in the market. Understanding the impacts of our decisions and the values of others is an ongoing area of growth for all of us. Even when we have quotas on sales-qualified leads to be delivered. 

Now let’s talk about data sovereignty. Someone pays for everything. The bigger and more lucrative the business, the more that has to be paid to keep alive the organizations formed to support an innovation. If you aren’t paying for a good or service, then you yourself are the commodity. In social media, this means a company makes its money from data about you and from the ads you see. The only other viable business model is to charge for the service, like a Premium LinkedIn account as opposed to the ones used by us proletariat.  

Our devices can see so much about us. They know our financial transactions, where we go, what we buy, what content we consume, and apparently what our opinions and triggers are. Sometimes, that data can be harnessed to show us ads. Ads about things to buy. Ads about apps to install. Ads about elections.

My crazy uncle Billy sends me routine invitations to take personality quizzes. No thanks. Never done one. Why?

I worked on one of the first dozen Facebook apps. A simple rock, paper, scissors game. At the time, it didn’t at all seem weird to me as a developer that there was an API endpoint to get a list of friends from within my app. It’s how we had a player challenge other players in a game. It didn’t seem weird that I could also get a list of their friends. And it didn’t seem weird that I could get a lot of personal data on people through that app. I mean I had to display their names and photos when they played a game, right? I just wanted to build a screen to invite friends to play the app. I had to show a photo so you could see who you were playing. And to make the game more responsive I needed to store the data in my own SQL tables. It didn’t seem weird then. I guess it didn’t seem weird until it did. 

What made it weird was the introduction of highly targeted analytics and retargeting. I have paid for these services. I have benefited from these services in my professional life and to some degree I have helped develop some. I’ve watched the rise of large data warehouses. I’ve helped buy phone numbers and other personally identifiable information of humans and managed teams of sellers to email and call those humans. Ad targeting, drip campaigns, lead scoring, and providing very specific messages based on attributes you know about a person are all a part of the modern sales and marketing machine at any successful company. 

And at some point, it went from being crazy how much information we had about people to being, well, just a part of doing business. The former Cambridge Analytica CEO Alexander Nix once said, “From Mad Men in the day to Math Men today.” From Don Draper to Betty’s next husband Henry (a politician), there are informal ties between advertising, marketing, and politics. Just as one of the founders of SCL, the parent company of Cambridge Analytica, had ties to royals, having dated one and gone to school with others in political power.

But there have also always been formal ties. Public Occurrences Both Foreign and Domestick was the first colonial newspaper in America and was formally suppressed after its first edition in 1690. But the Boston News-Letter was formally subsidized in 1704. Media and propaganda. Most newspapers in the US were just straight-up sponsoring, or sponsored by, a political platform until the 1830s. To some degree, that began with Ben Franklin’s big brother James Franklin in the early 1700s with the New England Courant. Franklin would create partnerships for content distribution throughout the colonies, spreading his brand of moral virtue. And the papers stoked the colonies into revolution. After the revolution, Hamilton instigated American Minerva as the first daily paper in New York - to be a Federalist paper. Of course, the Jeffersonian Republicans called him an “incurable lunatic.” And yet they still guaranteed us freedom of the press. 

And that freedom grew to investigative reporting, especially during the Progressive Era, from the tail end of the 19th century up until the start of the roaring twenties. While Teddy Roosevelt would call them Muckrakers, their tradition extends from Nellie Bly and Fremont Older to Seymour Hersh and Jonathan Kwitny, even to Woodward and Bernstein. They led to stock reform, civic reforms, uncovering corruption, exposing crime in labor unions, laying bare monopolistic behaviors, improving sanitation, and forcing us to confront racial injustices. They have been independent of party affiliation and yet constantly accused over the last hundred years of being against whomever is in power at the time.

Their journalism extended to radio and then to television. I think the founders would be proud of how journalism evolved and also unsurprised at some of the ways it has devolved. But let’s get back to the fact that someone is always paying. The people can subscribe to a newspaper, but advertising is a huge source of revenue. With radio and television flying across the airwaves for free, advertising became the thing that exclusively paid for content, and the ensuing decades became the golden age of that industry. And politicians bought ads. If there is zero chance a politician can win a state, why bother buying ads in that state? That’s a form of targeting with a pretty simple set of data. 

In Mad Men, Don is sent to pitch the Nixon campaign. There has always been a connection between disruptive new mediums and politics. Offices have been won by politicians able to gain access to early printing presses to spread their messages to the masses, by those connected to print media who could get articles and advertising, by great orators at the advent of radio, and by good-looking, charismatic politicians first able to harness television - especially in the Mad Men-fueled, ad-exec-inspired era that saw the Nixon campaigns in the 60s. The platforms to advertise become ubiquitous, they get abused, and then they become regulated. After television came news networks specifically meant to prop up an agenda, although unable to be directly owned by a party. None are “fake news” per se, but once abused by any, they can all be cast in doubt - especially by the abuser. 

The Internet was no different. The Obama campaign was really the first that leveraged social media and great data analytics - what can be considered the first big data campaign. And after his campaign carried him to a first term, the opposition was able to make great strides in countering that. Progress is often followed by laggards who seek to subvert the innovations of an era. And they often hire the teams who helped with previous implementations. 

Obama had a chief data scientist, Rayid Ghani. And a chief analytics officer. They put apps in the hands of canvassers and they mined Facebook networks of friends to try and persuade voters. They scored voters and figured out how to influence votes for certain segments. That was supplemented by thousands of interviews and thousands of hours building algorithms. By 2012 they were pretty confident they knew the nearly 70 million Americans who would put him in the White House. And that gave the Obama campaign the confidence to spend $52 million in online ads against Romney’s $26 million to bring home the win. And through all that, the Democratic National Committee ended up with information on 180 million voters.

That campaign would prove the hypothesis that big data could win big elections. Then comes the 2016 election. Donald Trump came from behind, out of a crowded field of potential Republican nominees, to not only secure the Republican nomination for president but then to win that election. He won the votes to be elected in the electoral college while losing the popular vote. That had happened when John Quincy Adams defeated Andrew Jackson in 1824, although it took a vote in the House of Representatives to settle that election. Rutherford B. Hayes defeated Samuel Tilden in 1876 in the electoral college while losing the popular vote. And it happened again when Grover Cleveland lost to Benjamin Harrison in 1888. And in 2000 when Bush beat Gore. And again when Trump beat Hillary Clinton - he solidly defeated her in the electoral college, 304 votes to her 227. 

Every time it happens, there seems to be plenty of rhetoric about changing the process. But keep in mind the framers built the system for a reason: to give the constituents of every state a minimum amount of power to elect officials that represent them. Each state gets two electors for its two senators, plus one for each of its members of the House of Representatives. States can choose how their electors are instructed to vote. Most states (except Maine and Nebraska) award all of their electors to a single ticket - the one that won the popular vote in that state. Once all the electors cast their votes, Congress counts the votes and the winner of the election is declared. 
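That allocation is simple enough to sketch in a few lines. This is just an illustration of the arithmetic described above - the House seat counts below are examples for a couple of states circa the 2010s, not a complete table.

```python
# Each state's electors = 2 (for its senators) + 1 per House seat.
# Example seat counts only; a real table would cover all 50 states plus DC.
house_seats = {"California": 53, "Texas": 36, "Wyoming": 1}

def electors(state):
    """Two electors for the senators, plus one per representative."""
    return 2 + house_seats[state]

# A majority of the 538 total electors is needed to win outright.
TO_WIN = 270

print(electors("California"))  # 55
print(electors("Wyoming"))     # 3
```

Because every state gets those two senatorial electors regardless of population, small states carry proportionally more weight per voter - which is exactly the minimum power the framers intended.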

So how did he come from behind? One easy place to blame is data. I mean, we can blame data for putting Obama into the White House, or we can accept a message of hope and change that resonated with the people. Just as we can blame data for Trump or accept a message that government wasn’t effective for the people. Since this is a podcast on technology, let’s focus on data for a bit. And more specifically let’s look at the source of one trove of data used for micro-targeting, because data is a central strategy for most companies today. And it was a central part of the past four elections. 

We see the ads on our phones, so we know that companies have this kind of data about us. Machine learning had been on the rise for decades. But a little company called SCL was started in 1990 as the Behavioral Dynamics Institute by a British ad man named Nigel Oakes after he left Saatchi & Saatchi. It gets dangerous when someone like him makes a comparison like “We use the same techniques as Aristotle and Hitler.”

Behavioural Dynamics studied how to change mass behavior through strategic communication - which US Assistant Secretary of Defense for Public Affairs Robert Hastings described in 2008 as the “synchronization of images, actions, and words to achieve a desired effect.” Sounds a lot like state-conducted advertising to me. And sure, reminiscent of Nazi tactics. You might also think of it as propaganda. Or “psy ops” in the Vietnam era. And they were involved in elections in the developing world. In places like Ukraine, Italy, South Africa, Albania, Taiwan, Thailand, Indonesia, Kenya, Nigeria, even India. And of course in the UK. Or at least on behalf of the UK and, whether directly or indirectly, the US. 

After Obama won his second term, SCL started Cambridge Analytica to go after American elections. They began to assemble a similar big data warehouse. They hired people like Brittany Kaiser who’d volunteered for Obama and would become director of Business Development. 

Ted Cruz used them in 2016, but it was the Trump campaign that was really able to harness their intelligence. Their principal investor was Robert Mercer, former co-CEO of the huge hedge fund Renaissance Technologies. He’d gotten his start at IBM Research working on statistical machine translation and was recruited in the 90s to apply data modeling and computing resources to financial analysis. This allowed the fund to earn nearly 40% per year on investments. An American success story. He was key in the Brexit vote, donating analytics services to Nigel Farage, and was an early supporter of Breitbart News. 

Cambridge Analytica would get involved in 44 races in the 2014 midterm elections. By 2016, Project Alamo was running at a million bucks a day in Facebook advertising. In the documentary The Great Hack, they claim this was to harvest fear. And Cambridge Analytica allowed the Trump campaign to get really specific with targeting. So specific that they were able to claim to have 5,000 pieces of data per person. 

Enter whistleblower Christopher Wylie, who claims over a quarter million people took a quiz called “This is Your Digital Life,” which exposed the data of around 50 million users. That data was moved off Facebook servers and stored in a warehouse where it could be analyzed and its fields merged with other data sources, all without the consent of the people who took the quiz or the people in their friend networks. Dirty tactics. 

Alexander Nix admitted to using bribery stings and prostitutes to influence politicians. So it should come as no surprise that they stole information on well over 50 million Facebook users in the US alone. And of course they then lied about it when being investigated by the UK for Russian interference and fake news in the lead-up to the Brexit referendum. Investigations go on. 

After investigations started piling up, some details started to emerge. This is Your Digital Life was written by Dr. Spectre. It gets better. That’s actually Aleksandr Kogan, working for Cambridge Analytica. He had received research funding from the University of St Petersburg and was then lecturing in the Psychology department at the University of Cambridge. It would be easy to jump to the conclusion that he was working for the Russkies, but here’s the thing: he also got research funding from Canada, China, the UK, and the US. He claimed he didn’t know what the app would be used for. That’s crap. When I’ve gotten a list of friends and friends’ friends that I could spider through, I parsed the data and displayed it on a screen as a pick list. He piped it out to a data warehouse. When you do that, you know exactly what’s happening with it. 
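To make the "spidering" concrete, here is a hedged sketch of how an app with friend-list permissions could walk a social graph. The graph and the `fetch_friends` helper are entirely hypothetical stand-ins, not the real Facebook Graph API - the point is only the mechanic: a few hundred thousand consenting quiz-takers expose everyone one hop away.

```python
from collections import deque

# Hypothetical in-memory stand-in for an API that returns a user's friends.
FAKE_GRAPH = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": []}

def fetch_friends(user_id):
    # In the real scenario this would be an HTTP call gated by an app permission.
    return FAKE_GRAPH.get(user_id, [])

def spider(seed_user, max_depth=2):
    """Breadth-first walk outward from one consenting user, collecting
    every reachable profile - most of whom never consented to anything."""
    seen, queue = {seed_user}, deque([(seed_user, 0)])
    while queue:
        user, depth = queue.popleft()
        if depth == max_depth:
            continue
        for friend in fetch_friends(user):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return seen

print(sorted(spider("alice")))  # ['alice', 'bob', 'carol', 'dave']
```

One quiz-taker, three exposed profiles. Scale the branching factor to a few hundred friends per person and a quarter million seeds, and 50 million records is unsurprising.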

So the election comes and goes. Trump wins. And people start asking questions. As they do when one party wins the popular vote but not the electoral college. People misunderstand the process and think redistricting lets a campaign win a few districts and carry the state, without realizing most states award their electors by straight statewide majority. Other muckraker reporters from around the world start looking into Brexit and US elections and asking questions. 

Enter Paul-Olivier Dehaye. While an assistant professor at the University of Zurich, he was working on Coursera. He started asking about the data collection. The word spread slowly but surely. Then enter American professor David Carroll, who sued Cambridge Analytica to see what data they had on him. Dehaye contributed to his Subject Access request, and suddenly the connections between Cambridge Analytica and Brexit started to surface, as did the connection between Cambridge Analytica and the Trump campaign, including photos of the team working with key members of the campaign. And ultimately of the checks cut. ’Cause there’s always a money trail. 

I’ve heard people claim that there was no interference in the 2016 elections, in Brexit, or in other elections. Now, if you think the American taxpayer didn’t contribute to some of the antics by Cambridge Analytica before they turned their attention to the US, I think we’re all kidding ourselves. And there was Russian meddling in US elections and illegally obtained materials were used, whether that’s emails on servers then leaked to WikiLeaks or stolen Facebook data troves. Those same tactics were used in Brexit. And here’s the thing, it’s been this way for a long, long time - it’s just so much more powerful today than ever before. And given how fast data can travel, every time it happens, unless done in a walled garden, the truth will come to light. 

Cambridge Analytica kinda’ shut down in 2018 after all of this came to light. What do I mean by kinda’? Well, former employees set up a company called Emerdata Limited, which then bought the SCL companies. Why? There were contracts and data. They brought on the founder of Blackwater, Mercer’s daughter Rebekah, and others to serve on the board of directors, and she was suddenly the “First Lady of the Alt-Right.” Whether or not Emerdata got all of the company, they got at least some of the scraped data from 87 million users. No company with the revenues they had goes away quietly or immediately. 

Robert Mercer donated the fourth largest amount in the 2016 presidential race. He was also the one who supposedly introduced Trump to Steve Bannon. In the fallout of the scandals, if you want to call them that, Mercer stepped down from Renaissance and sold his shares of Breitbart to his daughters. Today, he’s a benefactor of the Make America Number 1 Super PAC and remains one of the top donors to conservative causes. 

After leaving Cambridge Analytica, Nix was under investigation for a few years. He settled with the Federal Trade Commission, agreeing to delete illegally obtained data, and settled with the UK Secretary of State over having offered unethical services, agreeing not to act as a director of another company for at least 7 years. 

Brittany Kaiser fled to Thailand and is now a proponent of banning political advertising on Facebook and of people being able to own their own data. 

Facebook paid a $5 billion fine for data privacy violations and has overhauled its APIs and privacy options. It’s better but not great. I feel like they’re doing as well as they can, and they’ve been accused of tampering with feeds by conservative and liberal media outlets alike. To me, if they all hate you, you’re probably either doing a lot right, or basically screwing all of it up. I wouldn’t be surprised to see fines continue piling up. 

Kogan left the University of Cambridge in 2018. He founded Philometrics, a firm applying big data and AI to surveys. Their website isn’t up as of the recording of this episode. His Tumblr seems to be full of talk about acne and trying to buy cheat codes for video games these days. 

Many, including Kogan, have claimed that micro-targeting (or psychographic modeling techniques) against large enhanced sets of data isn’t effective. If you search for wedding rings and I show you ads for wedding rings, then maybe you’ll buy my wedding rings. If I see you bought a wedding ring, I can start showing you ads for wedding photographers and bourbon instead. Hey dummy, advertising works. Disinformation works. Analyzing and forecasting and modeling with machine learning works. Sure, some of it is snake oil. But early adopters made billions off it. Problem is, like that perfect gambling system, you wouldn’t tell people about it if doing so meant you lost your edge. Sell a book about how to weaponize a secret, and suddenly you probably are selling snake oil.  
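The wedding-ring example above is just attribute-based scoring. Here is a toy sketch of that idea - the attributes and weights are made up for illustration; real systems train models over thousands of features rather than hand-picking three.

```python
# Illustrative segment weights: positive pushes someone toward the ring ad,
# negative (already converted) pushes them toward follow-on offers instead.
segment_weights = {
    "searched_wedding_rings": 3.0,
    "age_25_34": 1.0,
    "bought_wedding_ring": -5.0,
}

def score(person):
    """Sum the weights of every attribute we know to be true for this person."""
    return sum(w for attr, w in segment_weights.items() if person.get(attr))

shopper = {"searched_wedding_rings": True, "age_25_34": True}
buyer = {"bought_wedding_ring": True, "age_25_34": True}

print(score(shopper))  # 4.0 -> show the wedding ring ad
print(score(buyer))    # -4.0 -> show photographers and bourbon instead
```

Swap the hand-set weights for coefficients fitted on behavioral data and you have the basic shape of lead scoring and ad targeting.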

As for regulatory reactions, can you say GDPR and all of the other privacy regulations that have come about since? Much as Sarbanes-Oxley introduced regulatory controls for corporate auditing and transparency, we regulated the crap out of privacy. And by regulated I mean a bunch of people who didn’t understand the way data is stored and disseminated over APIs made policy to govern it. But that’s another episode waiting to happen. Suffice it to say the lasting impact on the history of computing is both the regulations on privacy and the impact on identity providers and other API endpoints, where we needed to lock down entitlements to access various pieces of information due to rampant abuses. 

So here’s the key question in all of this: did the data help Obama and Trump win their elections? It might have moved a few points here and there. But it was death by a thousand cuts. Mis-steps by the other campaigns, political tides, segments of American populations desperately looking for change and feeling left behind while other segments of the population got all the attention, foreign intervention, voting machine tampering, not having a cohesive Opponent Party and so many other aspects of those elections also played a part. And as Hari Seldon-esque George Friedman called it in his book, it’s just the Storm Before the Calm. 

So whether the data did or did not help the Trump campaign, the next question is whether using the Cambridge Analytica data was wrong. This is murky. The data was illegally obtained. The Trump campaign was playing catchup with the maturity of the data held by the opposition. But the campaign can claim they didn’t know the data was illegally obtained. It is illegal to employ foreigners in political campaigns, and Bannon was warned about that. So was then-CEO Nix. But they were looking to instigate a culture war, according to Christopher Wylie, who helped found Cambridge Analytica. And look around, did they? 

Getting data models to a point where they have a high enough confidence interval that they are weaponizable takes years. Machine learning projects are very complicated, very challenging, and very expensive. And they are being used by every political campaign now insofar as the law allows. To be honest though, troll farms of cheap labor are cheaper and faster. Which is why three more got taken down just a month before the recording of this episode. But AI doesn’t do pillow talk, so eventually it will displace even the troll farm worker if only ‘cause the muckrakers can’t interview the AI. 

So where does this leave us today? Nearly every time I open Facebook, I see an ad to vote for Biden or an ad to vote for Trump. The US Director of National Intelligence recently claimed the Russians and Iranians were interfering with US elections. To do their part, Facebook will ban political ads indefinitely after the polls close on Nov. 3. They and Twitter are taking proactive steps to stop disinformation on their networks, including by actual politicians. And Twitter has actually just outright banned political ads. 

People don’t usually want regulations. But just as political ads in print, on the radio, and on television are regulated - they will need to be regulated online as well. As will the use of big data. The difference is the rich metadata collected in micro-targeting, the expansive comments areas, and the anonymity of those commenters. But I trust that a bunch of people who’ve never written a line of code in their life will do a solid job handing down those regulations. Actually, the FEC probably never built a radio - so maybe they will.

So as the election season comes to a close, think about this. Any data from large brokers about you is fair game. What you’re seeing in Facebook and even the ads you see on popular websites are being formed by that data. Without it, you’ll see ads for things you don’t want. Like the Golden Girls Season 4 boxed set. Because you already have it. But with it, you’ll get crazy uncle Billy at the top of your feed talking about how the earth is flat. Leave it or delete it, just ask for a copy of it so you know what’s out there. You might be surprised, delighted, or even a little disgusted by that site uncle Billy was looking at that one night you went to bed early.

But don’t, don’t, don’t think that any of this should impact your vote. Conservative, green, liberal, progressive, communist, social democrats, or whatever you ascribe to. In whatever elections in your country or state or province or municipality. Go vote. Don’t be intimated. Don’t let fear stand in the way of your civic duty. Don’t block your friends with contrary opinions. If nothing else listen to them. They need to be heard. Even if uncle Billy just can’t be convinced the world is round. I mean, he’s been to the beach. He’s been on an airplane. He has GPS on his phone… And that site. Gross.

Thank you for tuning in to this episode of the history of computing podcast. We are so, so, so lucky to have you. Have a great day. 

The Troubled History Of Voting Machines


Voters elect officials in representative democracies who pass laws, interpret laws, enforce laws, or appoint various other representatives to do one of the above. The terms of elected officials, the particulars of their laws, the structure of courts that interpret laws, and the makeup of the bureaucracies that are necessarily created to govern are different in every country. 

In China, the people elect the People’s Congresses, who then elect the nearly 3,000 National People’s Congress members, who then elect the President and State Council. The United States has a more direct form of democracy: the people elect a House of Representatives, a Senate, and a president, whom the founders intentionally locked into a power struggle to keep any part of the government from becoming authoritarian. Russia is set up similarly. In fact, the State Duma, like the House in the US, is elected by the people, and the 85 states, or federal subjects, then each send a pair of delegates to a Federal Council, like the Senate in the US, which has 170 members. It works similarly in many countries. Some, like England, still provide for hereditary titles, such as in the House of Lords - but even there, the Sovereign - currently Queen Elizabeth II - nominates a peer to a seat, and that peer is these days selected by the Prime Minister. It’s weird but I guess it kinda’ works. 

Across democracies, countries communist, socialist, capitalist, and even the constitutional monarchies practice elections. The voters elect these representatives to supposedly do what’s in the best interest of the constituents. That vote cast is the foundation of any democracy. We think our differences are greater than they are, but it mostly boils down to a few percentages of tax and a slight difference in the level of expectation around privacy, whether that expectation is founded or not. 

2020 marks a turning point for elections around the world. After allegations of attempted election tampering in previous years, the presidency of the United States will be voted on. And many of those votes are being carried out by mail. But others will be performed in person at polling locations and done on voting machines. 

At this point, I would assume that given how nearly every other aspect of American life has a digital equivalent, I could just log into a web portal and cast my vote. No. That is not the case. In fact, we can’t even seem to keep the voting machines from being tampered with. And we have physical control over those! So how did we get to such an awkward place, where the most important aspect of a democracy is so backwater? Let’s start at the beginning. 

Maybe it’s ok that voting machines and hacking play less a role than they should. Without being political, there is no doubt that Russia and other foreign powers have meddled in US elections. In fact, there’s probably little doubt we’ve interfered in theirs. Russian troll farms and disinformation campaigns are real. Paul Manafort maintained secret communications with the Kremlin. Former US generals were brought into the administration either during or after the election to make a truce with the Russians. And then there were the allegations about tampering voting machines. Now effectively stealing information about voters from Facebook using insecure API permissions. I get that. Disinformation goes back to posters in the time of Thomas Jefferson. I get that too. 

But hacking voting machines? I mean, these are vetted, right? At $3,000 to $4,500 each, and in bulk orders like the 16,000 machines Maryland bought from Diebold in 2005, you really get what you pay for, right? Wait, did you say 2005? Let’s jump forward to 2017. That’s the year DefCon opened the Voting Machine Hacking Village. And in 2019 not a single voting machine there was secure. In fact, one report from the conference said “we fear that the 2020 presidential elections will realize the worst fears only hinted at during the 2016 elections: insecure, attacked, and ultimately distrusted.”

I learned to pick locks, use L0phtCrack, run a fuzzer, and so much more at DefCon. Now I guess I’ve learned to hack elections. So again, every democracy in the world has one thing it just has to get right, voting. But we don’t. Why? Before we take a stab at that, let’s go back in time just a little. 

The first voting machine used in US elections was a guy with a bible. This is pretty much how it went up until the 1900s in most districts. People walked in and said the name of the official they were voting for, a poll worker wrote their name and vote into a poll book, the votes were tallied on the honor of that worker, and everyone got good and drunk. People love to get good and drunk. Voter turnout was in the 85 percent range. There was no expectation that the vote would be secret. Not yet at least. Additionally, you could campaign at the polling place - a practice now illegal in most places. Now let’s say the person taking the votes fudged something. There’s a log. People knew each other. Towns were small. Someone would find out. 

Now digitizing a process usually goes from vocal or physical to paper to digital to database to networked database to machine learning. It’s pretty much the path of technological determinism. As is failing because we didn't account for adjacent advancements in technology when moving a paper process to a digital process. We didn't refactor around the now-computational advances.

Paper ballots showed up in the 1800s. Parties would print small fliers that looked like train tickets so voters could show up and drop their ballot off. Keep in mind, adult literacy rates still weren’t all that high at this point. One party could print a ticket that looked kinda’ like the others. All kinds of games were being played.  We needed a better way. 


The 1800s were a hotbed of invention. 1838 saw the introduction of a machine where each voter got a brass ball which was then dropped in a machine that used mechanical counters to increment a tally. Albert Henderson developed a precursor to a computer that would record votes using a telegraph that printed ink in a column based on which key was held down. This was in 1850, with US Patent 7521. Edison took the idea to US Patent 90,646 and automated the counters in 1869. Henry Spratt developed a push-button machine. Anthony Beranek continued on with that but made one row per office and reset after each voter, similar to how machines work today. 


Jacob Meyers built on Beranek’s work and added levers in 1889, and Alfred Gillespie made the levered machine programmable. He and others formed the US Standard Voting Machine Company and slowly grew it. But something was missing, so we’ll step back a little in time. Remember those tickets and poll books? They weren’t standardized. 


The Australians came up with a wacky idea in 1858 to standardize on ballots printed by the government, which made it to the US in 1888. And like many things in computing, once we had a process on paper, the automation of that knowledge work - tabulating votes - would soon be ready to move into computing. Herman Hollerith brought punched card data processing to the US Census in 1890 - his company would later merge with others to form IBM. 


Towards the end of the 1890s John McTammany added the concept that voters could punch holes in paper to cast votes, and even went so far as to add pneumatic tabulation. They were using rolls of paper rather than cards. And so IBM started tabulating votes in 1936 with a dial-based machine that could count 400 votes a minute from cards. Frank Carrell at IBM got a patent for recording ballot choices on standardized cards. The stage was set for the technology to meet paper. By 1958 IBM had standardized punch cards to 40 columns and released the Port-A-Punch, so people in the field could punch information into a card to record findings and then bring it back to a computer for processing. Based on that, Joseph Harris developed the Votomatic punched cards in 1965 and IBM licensed the technology. In the meantime, a science teacher named Reynold Johnson had developed Mark Sense in the 1930s, which over time evolved into optical mark recognition, allowing us to fill in bubbles with a pencil. So rather than punch holes we could vote by filling in a bubble on a ballot.


All the pieces were in place and the technology slowly proliferated across the country, representing over a third of votes when Clinton beat Dole and Ross Perot in 1996. 


And then 2000 came. George W. Bush defeated Al Gore by a bitterly contested and narrow margin. It came down to Florida and issues with the ballots there. By some tallies, as few as 300 people decided the outcome of that election. Hanging chads are little pieces of paper that don’t get punched out of a card. Maybe unpunched holes in just a couple of locations caused the entire election to shift between parties. You could get someone drunk or document their vote incorrectly when it was orally provided in the early 1800s, or provide often illiterate people with mislabeled tickets prior to the Australian ballots. But this was the first time since the advent of the personal computer - when most people in the US had computers in their homes and the Internet bubble was growing by the day - that there was a problem with voting ballots, and suddenly people started wondering why we were still using paper. 


The answer isn’t as simple as the fact that the government moves slowly. I mean, the government can’t maintain the rate of technical innovation and progress anyways. But there are other factors as well. One is secrecy. Anywhere that has voting will eventually have some kind of secret ballots. This goes back to the ancient greeks but also the French Revolution. Secret ballots came to the UK in the 1840s with the Chartists and to the US after the 1884 election. As the democracies matured, the concept of voting rights matured and secret ballots were part of that. Making sure a ballot is secret means we can’t just allow any old person to look at a ballot. 


Another issue is decentralization. Each state selects their own machines and system and sets dates and requirements. We see that with the capacity and allocation of mail-in voting today. 


Another issue is cost. Each state also has a different budget. Meaning that there are disparities between how well a given state can reach all voters. When we go to the polls we usually work with volunteers. This doesn’t mean voting isn’t big business. States (and countries) have entire bureaucracies around elections. Bureaucracies necessarily protect themselves. 


So why not have a national voting system? Some countries do. Although most use electronic voting machines in polling places. But maybe something based on the Internet? Security. Estonia tried a purely Internet vote and due to hacking and malware it was determined to have been a terrible idea. That doesn’t mean we should not try again. 


The response to the 2000 election results was the Help America Vote Act of 2002, which defined standards managed by the Election Assistance Commission in the US. The result was the proliferation of new voting systems. ATM maker Diebold entered the US election market in 2002 and quickly became a large player. 


The CEO ended up claiming he was “committed to helping Ohio deliver its electoral votes to” Bush. They accidentally leaked their source code due to a misconfigured server, and they installed software patches that weren’t approved. In short, it was a typical tech empire that grew too fast and had the issues we’ve seen with many companies. Just with way more on the line. After a number of transitions between divisions and a number of issues, the business unit was sold to Election Systems & Software, now with coverage over 42 states. And having sold hundreds of thousands of voting machines, they now have over 60% of the market share in the US. That company goes back to the dissolution of a ballot tabulation division of Westinghouse and the Votronic. They are owned by a private equity firm called the McCarthy Group. 


They are sue-happy, though, and stifling innovation. And the problems are not just with ES&S. Hart InterCivic and Dominion are the next two biggest competitors, with similar issues. No voting machine company has a great track record with security. They are all private companies. They have all been accused of vote tampering. None of that has been proven. They have all had security issues.


In most of these episodes I try to focus on the history of technology or technocratic philosophy and maybe look to the future. I rarely offer advice or strategy. But there are strategies not being employed. 


The first strategy is transparency. In life, I assume positive intent. But transparency is really the only proof of that. Any company developing these systems should have transparent financials, provide transparency around the humans involved, provide transparency around the source code used, and provide transparency around the transactions, or votes in this case, that are processed. In an era of disinformation and fake news, transparency is the greatest protection of democracy. 


Providing transparency around financials can be a minefield. Yes, a company should make a healthy margin to continue innovating. That margin funds innovators and great technology. Financials around elections are hidden today because the companies are private. Voting doesn’t have to become a public utility but it should be regulated. 


Transparency of code is simpler to think through. Make it open source. Firefox gave us an open source web browser. Tor gave us transparent anonymity. The mechanism by which each transaction occurs is transparent, and any person with knowledge of open source systems can look for flaws in the system. Those flaws are then corrected, as with most common programming languages and protocols, by anyone with the technical skills to do so. I’m not the type that thinks everything should be open source. But this should be. 


There is transparency in simplicity. The more complex a system, the more difficult it is to unravel. The simpler a program, the easier for anyone with a working knowledge of programming to review it and, if needed, correct it. So a voting system should be elegant in its simplicity.


Verifiability. We could look at poll books in the 1800s and punch the vote counter in the mouth if they counted our vote wrong. The transparency of the transaction was verifiable. Today, there are claims of votes being left buried in fields and of fraudulent voters. Technologies like blockchain can protect against that, much as currency transactions are verified in Bitcoin. I usually throw up a little when I hear the term blockchain bandied about by people who have never written a line of code. Not this time. 


Let’s take hashing as a fundamental building block. Say you vote for a candidate, and the vote is stored as a text field, or varchar, containing the candidate’s name (or names) and the position they are running for. We can take all of the votes cast by a voter, store them in a JSON blob, commit that as a record in a database, and then add a block to a chain to provide a second point of verification. The voter would receive a randomly assigned GUID unique to them, thus protecting the anonymity of the vote. The micro-services here are: create a form for the voter, capture the vote, hash the vote, commit the vote to a database, duplicate the transaction into the voting blockchain, and allow for vote lookups. Each can be exposed from an API gateway that allows systems built by representatives of voters at the federal, state, and local levels to look up their votes. 


Any person voting is now capable of verifying that their vote was counted. If bad data is injected at the time of the transaction, the person can report the fraud. A separate table connecting vote GUIDs to IP addresses or any other PII would be accessible only to the appropriate law enforcement, and any attempt by law enforcement to access a record should be logged as well. Votes can be captured with web portals, voting machines that have privileged access, even 1800s-style voice counts.
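A minimal sketch of that scheme, with hypothetical names throughout: each ballot is hashed, chained to the previous block, and looked up later by a randomly assigned GUID. A real system would add signatures, access controls, and far more, but the core is this small.

```python
import hashlib
import json
import uuid

# Hypothetical sketch of the hash-and-chain scheme described above.
# Each block stores the ballot, the voter's random GUID, and a hash
# that covers the previous block's hash, so history can't be rewritten.

def cast_vote(chain, votes):
    """votes: dict of race -> candidate, e.g. {"President": "Candidate A"}."""
    voter_guid = str(uuid.uuid4())            # random, unique, anonymous
    ballot = json.dumps(votes, sort_keys=True)
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block_hash = hashlib.sha256(
        (prev_hash + voter_guid + ballot).encode()).hexdigest()
    chain.append({"guid": voter_guid, "ballot": ballot,
                  "prev": prev_hash, "hash": block_hash})
    return voter_guid                          # the voter's lookup receipt

def verify_vote(chain, voter_guid):
    """Return the ballot recorded for a GUID, re-checking the chain on the way."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            (prev + block["guid"] + block["ballot"]).encode()).hexdigest()
        if expected != block["hash"]:
            raise ValueError("chain tampered with")
        prev = block["hash"]
        if block["guid"] == voter_guid:
            return json.loads(block["ballot"])
    return None
```

The voter keeps only the GUID, so the chain itself never reveals who voted; tampering with any stored ballot breaks every hash after it.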


Here we have a simple and elegant system that allows for transparency, verifiability, and privacy. But we need to gate who can cast a vote. I have a PIN to access my IRS returns using my social security number or tax ID. But federal elections don’t require paying taxes. Nextdoor sent a card to my home and I entered a PIN printed on the card on their website. But that system has many a flaw. Section 303 of the Help America Vote Act of 2002 compels the motor vehicle office in each state to validate the name, date of birth, Social Security number, and whether someone is alive. But not every voter drives. Further, not every driver meets voting requirements. And those requirements differ per state. 


And so it becomes challenging to authenticate a voter. We do so in person, en masse, at every election thanks to the staff and volunteers of various election precincts. In Minnesota, I provided my driver’s license number when I submitted my last ballot through the mail. If I had moved since the last time I voted, I would also have needed a utility bill to validate my physical address. A human verifies that. Theoretically I could vote in multiple precincts if I were able to fabricate a paper trail to do so. If I did, I would go to prison. 


Providing a web interface is incredibly dangerous unless browsers support a mechanism to validate the authenticity of both the source and the destination. Especially when state-sponsored actors have been proven able to bypass safeguards such as HTTPS. And then there’s the source. It used to be common practice to use Social Security numbers or cards as a form of verification for a lot of things. That isn’t done any more due to privacy concerns and, of course, identity theft. 


You can’t just keep usernames and passwords sitting in a database any more. So the only real answer here is a federated identity provider. This is where OAuth, OpenID Connect, and/or SAML come into play. A federated identity provider retains a centralized set of information about people. Other entities then tie into that centralized identity source and pull information from it, authenticating and authorizing users with one of the protocols mentioned. 
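As an aside on the password point: even where passwords must exist, they are never stored raw. A minimal standard-library sketch of the usual salt-and-slow-hash approach (the iteration count and sizes here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

# Sketch of why plaintext passwords never belong in a database: store a
# random salt plus a slow key-derivation hash, and compare in constant time.
# A real identity provider layers far more on top of this.

def hash_password(password: str):
    salt = os.urandom(16)                      # unique per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Even so, a database of hashes leaks the fact that an account exists, which is part of why the text argues for federating identity rather than minting yet another credential store.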


I’ve been involved in a few of these projects and, to be honest, they kinda’ all suck. Identities would need to be created and the usernames and passwords distributed. That means we have to come up with a scheme that everyone in the country (or at least the typically ill-informed representatives we put in place to make choices on our behalf) can agree on. And even if a perfect scheme for usernames is found, there are crazy levels of partisanship. Passwords should be complex, but with all of the factors that come into play it’s hard to imagine consensus on a level that protects people while still being memorable. 


The other problem with a federated identity is privacy. Let’s say you forget your password. You need information about a person to reset it. There’s also this new piece of information out there that represents yet another piece of personally identifiable information. Why not just use a social security number? That would require a whole other episode to get into, but it’s not an option. Suddenly a date of birth, a phone number (for two-factor authentication), alive-or-not status, possibly a driver’s license number, and maybe a social security number all sit in a table somewhere so the system can talk to the Social Security databases to update that alive status. It gets complicated fast. It’s no less private than voter databases that have already been hacked in previous elections, though. 


Some may argue we should use biometric markers instead of all the previous whatnot. Take your crazy uncle Larry, who thinks the government already collects too much information about him and tells you so when he’s making off-color jokes. Yah, now tell him to scan his eyeball or fingerprint into the database. When he’s done laughing at you, he may show you why he has a concealed carry permit. 


And then there’s ownership. No department within an organization I’ve seen wants to allow an identity project unless they get budget and permanent head count. And no team wants another team to own it. When bureaucracies fight, it takes time to come to the conclusion that a new bureaucracy needs to be formed if we’re going anywhere. Then the other bureaucracies make the life of the new one hard and slow down the whole process. Sometimes needfully, sometimes accidentally, and sometimes out of pure spite or bickering over power. The most logical bureaucracy in the federal government to own such a project would be the Social Security Administration or the Internal Revenue Service.  


Some will argue states should each have their own identity provider. We need one for taxes, social security, benefits, and entitlement programs. And by the way, we’re at a point in history when people move between states more than ever. If we’re going to protect federal and state elections, we need a centralized provider of identities. And this is going to sound crazy, but the federal government should probably just buy a company who already sells an IdP (like most companies would do if they wanted to build one) rather than contract with one or build their own. If you have to ask why, you’ve never tried to build one yourself or been involved in any large-scale software deployments or development operations at a governmental agency. I could write a book on each. 


There are newer types of options. You could roll with an IndieAuth Identity Provider, which is a decentralized approach, but that’s for logging into apps using Facebook or Apple or Google - use it to shop and game, not to vote. NIST should make the standards, FedRAMP should provide assessment, and we can loosely follow the model of the European self-sovereign identity framework or ESSIF but build on top of an existing stack so we don’t end up taking 20 years to get there. 

Organizations that can communicate with an identity provider are called Service Providers. Only FedRAMP certified public entities should be able to communicate with a federal federated identity provider. Let’s just call it the FedIdP. 

Enough on the identity thing. Suffice it to say, it’s necessary if we’re to go from trusting poll workers to being able to vote online. And here’s the thing about all of this: confidence intervals. What I mean is that we have gone from being able to verify our votes in poll books, and being able to see other people in our communities vote, to trusting black boxes built by faceless people whose political allegiances are unknown. And as is so often the case when the technology fails us, rather than think through the next innovation we retreat to the previous step in the technological cycle: if we’re stuck at localized digitization, we retreat to paper. If we’re stuck at taking those local repositories online, we retreat to the localized digital repository. If we’re stuck at punch cards due to hanging chads, we might have to retreat to voice voting. Each has a lower confidence interval than a verifiable and transparent online alternative. And the chance of voter fraud by mail is still around .00006 percent, close to five nines.

We need to move forward. It’s called progress. The laws of technological determinism are such that taking the process online is the next step. And it’s crucial for social justice. I’ve over-simplified what it will take. Anything done on a national scale is hard. And time consuming. So it’s a journey that should be begun now.

In the meantime, there’s a DARPA prize. Given the involvement of a few key DARPA people with DEF CON, and the findings on voting machine security (whether that’s computers being online and potentially fallible, physically hackable, or just plain bad), DARPA offered a prize to the organization that could develop a tamper-proof, open-source voting machine. I actually took a crack at this, not because I believed it to be a way to make money but because after the accusations of interference in the 2016 election I just couldn’t not. Ultimately I decided this could be solved with an app in single app mode, a printer to produce a hash and a GUID, and some micro-services, but that the voting machine was the wrong place for the effort; the effort should instead be put into taking voting online. 

Galois theory gives us a connection between field theory and group theory: you simplify field theory problems so they can be solved with group theory. And I’ve oversimplified the solution for this problem. But just as with studying the roots of polynomials, sometimes simplicity is elegance rather than hubris. In my own R&D efforts I struggle to understand when I’m exuding each. 

The 2020 election is forcing many to vote by mail. As with other areas that have not gotten the innovation they needed, we’re having to rethink a lot of things. And voting in person at a polling place should certainly be one. As should the cost of physically delivering those ballots and the human cost to get them entered. 

The election may or may not be challenged by luddites who refuse to see the technological determinism staring them in the face. This is a bipartisan issue. No matter who wins or loses, the other party will cry foul. It’s their job as politicians. But it’s my job as a technologist to point out that there’s a better way. The steps I outlined in this episode might be wrong. But if someone can point out a better way, I’d like to volunteer my time and focus to propelling it forward. And dear listener, think about this: when progress is challenged, what innovation can you bring, or contribute to, that helps keep us from retreating to increasingly analog methods? 

Herman Hollerith brought us the punch card, building on an idea that had been floating around since the Jacquard loom in 1801. People like him moved technology forward in fundamental ways. In case no one ever told you, you have even better ideas locked away in your head. Thank you for letting them out. And thank you for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you.

The Intergalactic Memo That Was The Seed Of The Internet


JCR Licklider sent a memo called “Memorandum For Members and Affiliates of the Intergalactic Computer Network” in 1963 that is quite possibly the original spark that lit the bonfire called the ARPANET, the nascent beginning of what we now call the Internet. In the memo, “Lick,” as his friends called him, documented early issues in building out a time-sharing network of computers available to the research scientists of the early 60s. 

The memo is a bit long so I’ll include quotes followed by explanations or I guess you might call them interpretations. Let’s start with the second paragraph:

The need for the meeting and the purpose of the meeting are things that I feel intuitively, not things that I perceive in clear structure. I am afraid that that fact will be too evident in the following paragraphs. Nevertheless, I shall try to set forth some background material and some thoughts about possible interactions among the various activities in the overall enterprise for which, as you may have detected in the above subject, I am at a loss for a name.

Intuition, to me, is important. Lick had attended conferences on cybernetics and artificial intelligence going back to the 40s. He had been MIT faculty and was working for a new defense research organization. He was a visionary. Still, let’s call his vision a hypothesis. During the 1960s, the Soviets would attempt to build multiple networks similar to the ARPANET. But much like a modern product manager, Lick chunked up the work to be done and had various small teams tackle parts of projects, each building a part but in the whole proving the theory in a decentralized way, as compared to Soviet projects that went all-in.

A couple of paragraphs later, Lick goes on to state:

In pursuing the individual objectives, various members of the group will be preparing executive the monitoring routines, languages amd [sic.] compilers, debugging systems and documentation schemes, and substantive computer programs of more or less general usefulness. One of the purposes of the meeting–perhaps the main purpose–is to explore the possibilities for mutual advantage in these activities–to determine who is dependent upon whom for what and who may achieve a bonus benefit from which activities of what other members of the group. It will be necessary to take into account the costs as well as the values, of course. Nevertheless, it seems to me that it is much more likely to be advantageous than disadvantageous for each to see the others’ tentative plans before the plans are entirely crystalized. I do not mean to argue that everyone should abide by some rigid system of rules and constraints that might maximize, for example, program interchangeability.

Here, he’s acknowledging that stakeholders have different needs, goals and values, but stating that if everyone shared plans the outcome could be greater across the board. He goes on to further state that:

But, I do think that we should see the main parts of the several projected efforts, all on one blackboard, so that it will be more evident than it would otherwise be, where network-wide conventions would be helpful and where individual concessions to group advantage would be most important.

These days we prefer a whiteboard or maybe even a Miro board. But this act of visualization would let research from disparate fields, like Paul Baran’s work on packet switching at RAND at the time, be pulled in when thinking about how networks would look and work. While the government was providing money to different institutes, the research organizations were autonomous. And by having each node able to operate on its own rather than employing a centralized approach, the network could be built such that signals could travel along multiple paths in case one path broke down, getting at the heart of the matter: having a network that could survive a nuclear attack provided some link or links survived. 
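That multi-path idea is easy to sketch. Below is a toy mesh with breadth-first search standing in for real routing; cut a link and traffic still finds a way. The node names echo early ARPANET sites but are purely illustrative here.

```python
from collections import deque

# Toy illustration of multi-path resilience: a small mesh of nodes,
# a breadth-first search for a route, and a second route that survives
# when one link is cut.

def find_path(links, src, dst):
    """Breadth-first search over an undirected set of links; returns a path or None."""
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route at all

mesh = [("UCLA", "SRI"), ("SRI", "Utah"), ("UCLA", "UCSB"), ("UCSB", "Utah")]
# Knock out the UCLA-SRI link; a packet from UCLA reroutes through UCSB.
survivable = [link for link in mesh if link != ("UCLA", "SRI")]
```

The point of the decentralized design is exactly this: no single link is a single point of failure.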

He then goes on to state:

It is difficult to determine, of course, what constitutes “group advantage.” Even at the risk of confusing my own individual objectives (or ARPA’s) with those of the “group,” however, let me try to set forth some of the things that might be, in some sense, group or system or network desiderata.

This is important. In this paragraph he acknowledges his own motive, but sets up a value proposition for the readers. He then goes on to lay out a future that includes an organization like what we now use the IETF for in:

There will be programming languages, debugging languages, time-sharing system control languages, computer-network languages, data-base (or file-storage-and-retrieval languages), and perhaps other languages as well. It may or may not be a good idea to oppose or to constrain lightly the proliferation of such. However, there seems to me to be little question that it is desireable to foster “transfer of training” among these languages. One way in which transfer can be facilitated is to follow group consensus in the making of the arbitrary and nearly-arbitrary decisions that arise in the design and implementation of languages. There would be little point, for example, in having a diversity of symbols, one for each individual or one for each center, to designate “contents of” or “type the contents of.” 

The IETF and IEEE now manage the specifications that control protocols and hardware, respectively. The early decisions were made for a small collection of nodes on the ARPANET, and as the nodes grew and the industry matured, protocols began to be defined very specifically, such as DNS, covered in, what, the second episode of this podcast? Lick didn’t yet know what we didn’t know, but he knew that if things worked out, these governing bodies would need to emerge to keep splinter nets to a minimum. At the time, though, they weren’t thinking much about network protocols. They were speaking of languages. But he then goes on to lay out a network-control language, which would emerge as protocols.

Is the network control language the same thing as the time-sharing control language? (If so, the implication is that there is a common time-sharing control language.) Is the network control language different from the time-sharing control language, and is the network-control language common to the several netted facilities? Is there no such thing as a network-control language? (Does one, for example, simply control his own computer in such a way as to connect it into whatever part of the already-operating net he likes, and then shift over to an appropriate mode?)

In the next few paragraphs he lays out a number of tasks that he’d like to accomplish, or at least that he can imagine others would like to accomplish, such as writing programs to run on computers, accessing files over the net, or reading in teletypes remotely. And he lays out storing photographs on the network and running applications remotely, much the way we do with microservices today. He refers to information retrieval, searching for files based on metadata, natural language processing, accessing research from others, and bringing programs into a system from a remote repository, much as we do with CPAN, Python imports, and GitHub today. 

Later, he looks at how permissions will be important on this new network:

Here is the problem of protecting and updating public files. I do not want to use material from a file that is in the process of being changed by someone else. There may be, in our mutual activities, something approximately analogous to military security classification. If so, how will we handle it?

It turns out that the first security issues were because of eased restrictions on resources. Whether that was viruses, spam, or just accessing protected data. Keep in mind, the original network was to facilitate research during the cold war. Can’t just have commies accessing raw military research can we? As we near the end of the memo, he says:

The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence.

Again, it was meant to be a military network. It was meant to be resilient and withstand a nuclear attack. That had already been discussed in meetings before this memo. Here, he’s shooting questions to stakeholders. But consider the name of the memo, Memorandum For Members and Affiliates of the Intergalactic Computer Network. Not “A” network but “the” network. And not just any network, but THE Intergalactic Network. Sputnik had been launched in 1957. The next year we got NASA. 

Eisenhower then began the process that resulted in the creation of ARPA to do basic research so the US could leapfrog the Soviets. The Soviets had beaten the US to a satellite by using military rocketry to get to space. The US chose to use civilian rocketry and so set a standard that space (other than the ICBMs) would be outside the cold war. Well, ish. 

But here, we were mixing military and civilian research in the hallowed halls of universities. We were taking the best and brightest and putting them into the employ of the military without putting them under the control of the military. A relationship that worked well until the Mansfield Amendment to the 1970 Military Authorization Act ended military funding of research that didn’t have a direct or apparent relationship to a specific military function. What happened between when Lick started handing out grants to people he trusted and that act would change the course of the world, allowing the US to do what the Soviets and other countries had been tinkering with: effectively develop a nationwide link of computers to provide for one of the biggest eras of collaborative research the world has ever seen. What the world wanted was an end to violence in Vietnam. What they got was a transfer of technology from the military industrial complex to corporate research centers like Xerox PARC, Digital Equipment Corporation, and others. 

Lick then goes on to wrap the memo up:

In conclusion, then, let me say again that I have the feeling we should discuss together at some length questions and problems in the set to which I have tried to point in the foregoing discussion. Perhaps I have not pointed to all the problems. Hopefully, the discussion may be a little less rambling than this effort that I am now completing.

The researchers would continue to meet. They would bring the first node of the ARPANET online in 1969. In that time they’d also help fund research such as the NLS, or oN-Line System, which eventually resulted in mainstreaming the graphical user interface and the mouse. Lick would head the Information Processing Techniques Office and launch Project MAC, the first big, serious research into personal computing. They’d fund Transit, an important navigation system that ran until 1996, when it was replaced by GPS. They built Shakey the robot. And yes, they did a lot of basic military research as well. 

And today, modern networks are intergalactic. A bunch of nerds did their time planning and designing and took UCLA online, then SRI, then UCSB, and then a PDP-10 at the University of Utah. Four nodes, four types of computers, four operating systems. Leonard Kleinrock and the next generation would then take the torch and bring us into the modern era. But that story is another episode. Or a lot of other episodes. 

We don’t have a true Cold War today. We do have some pretty intense rhetoric. And we have a global pandemic. Kinda’ makes you wonder what basic research is being funded today and how that will shape the world over the next 57 years, the way this memo has shaped the world. Or, given that there were programs in the Soviet Union and other countries to do something similar, was it really a matter of technological determinism? Not to take anything away from the hard work put in at ARPA and abroad. But for me at least, the jury is still out on that. I don’t have any doubt, though, that the next wave of changes will be even more impactful. Crazy to think, right?



This prefix “cyber” is pretty common in our vernacular today. Actually, it was more common in the 90s, and now it seems reserved mostly for governmental references. But the prefix has a rich history. We got cyborg in 1960 from Manfred Clynes and Nathan S. Kline. And X-Men issue 48 in 1968 introduced a race of robots called Cybertrons, likely the inspiration for the name of the planet the Transformers would inhabit as they morphed from the Japanese Microman and Diaclone toys.

We got cyberspace from William Gibson in 1982 and cyberpunk from the underground art scene in the 1980s. We got cybersex in the mid-90s with AOL. The term cybercrime rose to prominence in that same timeframe, being formalized in use by the G8 Lyon Group on High-Tech Crime. And we got cybercafes, cyberstalking, cyberattack, cyberanarchism, cyberporn, and even cyberphobia. All of those sound kinda’ ick. 

And so today, the word cyber is used as a prefix for the culture of computers, information technology, and virtual reality, and the meaning is pretty instantly identifiable. But where did it come from? The word is short for cybernetic, from the Greek for skilled in steering or governing. 

Cybernetics is a multi-disciplinary science, or pseudo-science depending on who you talk to, that studies systems. It’s defined in its truest form by the original 1948 definition from the author who pushed it into the mainstream, Norbert Wiener: “the scientific study of control and communication in the animal and the machine.” 

Aaaactually, let’s back up a minute. French physicist André-Marie Ampère coined the term cybernétique in 1834 as part of his attempt to classify human knowledge. His work on electricity and magnetism would earn him the honor of having the amp named after him. But jump forward to World War II. After huge strides in general systems theory, negative feedback loops, and the amazing work done at Bell Labs, MIT’s Jay Forrester (who would invent core computer memory) and Gordon Brown defined automatic feedback control systems and solidified servomechanisms, or servos, applying systems thinking all over engineering. Forrester then applied that thinking to management, which fed into the MIT Sloan School of Management. Deming applied these concepts to process, resulting in Total Quality Management, a heavy influence on what we call Six Sigma today. And John Boyd applied systems thinking and feedback loops to military strategy. So a lot of people around the world were taking a deeper look at process, feedback, loops, and systems in general. 
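The negative feedback loop at the heart of a servo is simple enough to sketch: measure the error between the setpoint and the current reading, and correct by a fraction of it each cycle. A minimal proportional controller, with made-up values:

```python
# A minimal negative-feedback loop, the core idea behind the
# servomechanisms mentioned above: steer toward a setpoint by
# repeatedly correcting a fraction of the error.
# The gain and readings here are illustrative only.

def settle(setpoint, reading, gain=0.5, steps=20):
    """Proportional control: each step closes part of the gap."""
    history = [reading]
    for _ in range(steps):
        error = setpoint - reading
        reading += gain * error      # negative feedback: the error shrinks
        history.append(reading)
    return history

# With gain 0.5 the error halves every step, so the reading converges
# on the setpoint.
trace = settle(setpoint=100.0, reading=20.0)
```

That single subtraction-and-correct loop, generalized, is what let the same idea travel from anti-aircraft guns to management theory.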

During World War II, systems thinking was on the rise. Seeing the rise of the computer, Norbert Wiener worked on anti-aircraft guns and was looking into what we now call information theory at about the same time Claude Shannon was. Whereas Shannon went on to formalize Information Theory, Wiener formalized his work as cybernetics. He had published “A simplification of the logic of relations” in 1914, so he wasn’t new to this philosophy of melding systems and engineering. But things were moving quickly. ENIAC had gone live in 1946. Claude Shannon published a paper in 1948 that would emerge as a book, “A Mathematical Theory of Communication,” by 1949. So Wiener published his own book, Cybernetics, or Control and Communication in the Animal and the Machine, in 1948. And Donald MacKay was releasing his book on multiplication and division by electronic analogue methods in 1948 in England. Turing’s now famous work during World War II had helped turn the tides, and after the war he was working on the Automatic Computing Engine. John von Neumann had gone from developing game theory to working on the Manhattan Project and nuclear bombs, then to working on computing at Princeton and starting to theorize on cellular automata. J.C.R. Licklider was just discovering the computer while doing psychoacoustics research at Harvard, work that would propel him to become the Johnny Appleseed of computing and the instigator at the center of what we now call the Internet and personal computers.

Why am I mentioning so many of the great early thinkers in computing? Because while Wiener codified cybernetics, he was not alone responsible for it. In fact, Cybernetics was the name of a set of conferences held from 1946 to 1953 and organized by the Josiah Macy, Jr. Foundation. Those conferences, that foundation, and the principles that sprang from them and went around the world were far more influential in Western computing in the 50s and 60s than they are usually given credit for.

All of those people mentioned, and dozens of others responsible for so many massive discoveries, were at those conferences and in the clubs around the world that sprang up from their alumni. They were looking for polymaths who could connect dots, and deep thinkers in specialized fields, to bring science forward through an interdisciplinary lens. In short, we had gone beyond a time when a given polymath could excel at various aspects of the physical sciences and into a world where we needed brilliant specialists connected with those polymaths to gain quantum leaps in one discipline, effectively from another. 

And so Wiener took his own research and sprinkled in bits from others and formalized Cybernetics in his groundbreaking book. From there, nearly every discipline integrated the concept of feedback loops. Plato, who the concept can be traced back to, would have been proud. And from there, the influence was massive. 

The Cold War military-industrial-university complex was coming into focus. Paul Baran at RAND would read McCulloch and Pitts’ work from cybernetics on neural nets and use it as inspiration for packet switching. That work and the work of many others in the field is now the basis for how computers communicate with one another. The Soviets, beginning with Glushkov, would hide cybernetics and dig it up from time to time, restarting projects to network their cities and automate the command and control economy. Second-order cybernetics would emerge to address observing systems, and third-order cybernetics would emerge as applied cybernetics from the first and second orders. We would get system dynamics, behavioral psychology, cognitive psychology, organizational theory, neuropsychology, and the list goes on. 

The book would go into a second edition in 1965. While at MIT, Wiener was also influential in early theories around robotics and automation. Applied cybernetics. But at the Dartmouth workshop in 1956, John McCarthy, along with Marvin Minsky and Claude Shannon, would effectively split the field into what they called artificial intelligence. The book Emergence is an excellent look at applying the philosophies to ant colonies and analogizing what human enterprises can extract from that work. Robotics is made possible by self-correcting mechanisms in the same way learning organizations and self-organization factor in. Cybernetics led to control theory, dynamic systems, and even chaos theory. We've even brought biocybernetics into ecology, synthetic and systems biology, engineering, and even management. 

The social sciences have been heavily inspired by cybernetics. Attachment theory, the cognitive sciences, and psychovector analysis are areas where psychology took inspiration. Sociology, architecture, law. The list goes on. 

And still, we use the term artificial intelligence a lot today. This is because we are more focused on productivity gains and the truths the hard sciences can tell us with statistical modeling than with the feedback loops and hard study we can apply to correcting systems. I tend to think this is related to what we might call “trusting our guts.” Or just moving so fast that it’s easier to apply a simplistic formula to an array to find a k-nearest neighbor than it is to truly analyze patterns and build feedback loops into our systems. It’s easier to do things because “that’s the way we’ve always done that” than to set our ego to the side and look for more efficient ways. That is, until any engineer on a production line at a Toyota factory can shut the whole thing down due to a defect. But even then it’s easier to apply principles from lean manufacturing than to truly look at our own processes, even if we think we’re doing so by implementing the findings from another. 

I guess no one ever said organizational theory was easy. And so whether it's the impact on the Internet, the revolutions inspired in the applied sciences, or just that Six Sigma Black Belt we think we know, we owe Wiener and all of the others involved in the early and later days of Cybernetics a huge thank you. The philosophies they espoused truly changed the world. 

And so think about this. The philosophies of Adam Smith were fundamental to a new world order in economics. At least, until Marx inspired Communism and the Great Depression inspired English economist John Maynard Keynes to give us Keynesian economics. Which is still applied to some degree, although one could argue incorrectly with stimulus checks when compared to the New Deal. Necessity is the mother of invention. So what are the new philosophies emerging from the hallowed halls of academia? Or from the rest of the world at large? What comes after Cybernetics and Artificial Intelligence? Is a tough economy when we should expect the next round of innovative philosophy that could then be applied to achieve the same kinds of productivity gains we got out of the digitization of the world? Who knows. But I'm an optimist that we can get inspired - or I wouldn't have asked. 

Thank you for tuning in to this episode of the history of computing podcast. We are so lucky to have you. Have a great day. 

PGP and the First Amendment


I was giving a talk at DefCon one year and this guy starts grilling me at the end of the talk about the techniques Apple was using to encrypt home directories at the time with a new technology called FileVault. It went on a bit, so I did that thing you sometimes have to do when it's time to get off stage and told him we'd chat after. And of course he came up - and I realized he was really getting at the mechanism used to decrypt and the black box around decryption. He knew way more than I did about encryption so I asked him who he was. When he told me, I was stunned.

Turns out that, like me, Phil Zimmermann enjoyed listening to A Prairie Home Companion. And on that show, Garrison Keillor would occasionally talk about Ralph's Pretty Good Grocery in a typical Minnesota hometown he'd made up called Lake Wobegon. Zimmermann liked the name and so called his new encryption tool PGP, short for Pretty Good Privacy. It was originally written to encrypt messages being sent to bulletin boards. 

That original tool didn’t require any special license, provided it wasn’t being used commercially. And today, much to the chagrin of the US government at the time, it’s been used all over the world to encrypt emails, text files, text messages, directories, and even disks. But we’ll get to that in a bit. 

Zimmermann had worked for the Nuclear Weapons Freeze Campaign in the 80s after getting a degree in computer science from Florida Atlantic University in 1978. And after seeing the government infiltrate organizations behind Vietnam War protests, he wanted to protect the increasingly electronic communications of anti-nuclear protests and activities. 

The world was just beginning to wake up to a globally connected Internet. And the ARPAnet had originally been established by the military industrial complex, so it was understandable that he’d want to keep messages private that just happened to be flowing over a communications medium that many in the defense industry knew well. So he started developing his own encryption algorithm called BassOmatic in 1988. That cipher used symmetric keys with control bits and pseudorandom number generation as a seed - resulting in 8 permutation tables. He named BassOmatic after a Saturday Night Live skit. I like him more and more. 

He'd replace BassOmatic with IDEA in version 2 in 1992. And thus began the web of trust, which survives to this day in PGP, OpenPGP, and GnuPG. Here, rather than relying on a centralized certificate authority, a message is considered authentic based on being bound to a public key. Each user generates a public and private key pair, and messages can only be decrypted or signed with the private key. Back then you would show your ID to someone at a key signing event or party in order to have your key signed. Public keys could then be used to check that the individual you thought was the signer really was. Once verified, a separate key could be used to encrypt messages between the parties. 
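The public and private key idea at the heart of the web of trust can be sketched with a toy RSA-style example. To be clear, this is purely illustrative - tiny primes, no padding, completely insecure - and it is not the actual cipher suite PGP shipped, just the general shape of asymmetric keys:

```python
# Toy RSA-style keypair with tiny primes. Illustrative only - real
# asymmetric crypto uses enormous primes, padding, and vetted libraries.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

def encrypt(m, key=(e, n)):
    # Anyone with the public key can encrypt
    return pow(m, key[0], key[1])

def decrypt(c, key=(d, n)):
    # Only the private key holder can decrypt
    return pow(c, key[0], key[1])

msg = 42
assert decrypt(encrypt(msg)) == msg

# Signing is the same math run the other way: transform with the private
# key, and anyone holding the public key can verify who sent it.
signature = pow(msg, d, n)
assert pow(signature, e, n) == msg
```

The key signing parties mentioned above existed to solve the one thing this math can't: proving that a given public key really belongs to the person you think it does.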

But by then, there was a problem. The US government began a criminal investigation against Zimmermann in 1993. You see, the encryption used in PGP was too good. Anything over a 40-bit encryption key was subject to US export regulations as a munition. Remember, the Cold War. And PGP used 128-bit keys at a minimum. So Zimmermann did something that the government wasn't expecting. Something that would make him a legend. He went to MIT Press and published the PGP source code in a physical book. Now anyone could OCR the software and run it through a compiler. Suddenly, his code was protected as an exportable book by the First Amendment. 

The government dropped the investigation and found something better to do with their time. And from then on, source code for cryptographic software became an enabler of free speech, which has been held up repeatedly in the appellate courts. So 1996 comes along and PGP 3 is finally available. This is when Zimmermann founded PGP as a company so they could focus on it full-time. Due to a merger with Viacrypt they jumped to PGP 5 in 1997. 

Towards the end of 1997, Network Associates acquired PGP and expanded it to add things like intrusion detection, full disk encryption, and even firewalls. Under Network Associates they stopped publishing their source code and Zimmermann left in 2001. Network Associates couldn't really find the right paradigm and so merged some products together, and what was PGP Command Line ended up becoming McAfee E-Business Server in 2013. 

But by 2002 PGP Corporation was born out of a few employees securing funding from Rob Theis to help start the company and buy the rest of the PGP assets from Network Associates. They managed to grow it enough to sell it for $300 million to Symantec and PGP lives on to this day. 

But I never felt like they were in it just for the money. The money came from a centralized policy server that could do things like escrow keys. For that core feature of encrypting emails and later disks, I really always felt like they wanted a lot of it to be free. And while you can buy Symantec Encryption Desktop and command it from a server, S/MIME and OpenPGP live on in ways that real humans can encrypt their communications - some of them in places where their messages might get them thrown in jail.

By the mid-90s, mail wasn't just about the text in a message. It was more. RFC 934 in 1985 had started the idea of encapsulating messages so you could attach metadata. RFC 1521 in 1993 formalized MIME and by 1996, MIME was getting really mature in RFC 2045. But by 1999 we wanted more and so S/MIME went out as RFC 2633. Here, we could use CMS to "cryptographically enhance" a MIME body. In other words, we could suddenly encrypt more than the text of an email, and since it was an accepted internet standard, it could be encrypted and decrypted with standard mail clients rather than just with a PGP client that didn't have all the bells and whistles of pretty email clients. 

That included signing information, which by 2004 would evolve to include attributes for things like signingTime, SMIMECapabilities, algorithms and more. 

Today, iOS can use S/MIME, and keys can be stored in Exchange or Office 365, compatible with any other mail client that has S/MIME support, making it easier than ever to get certificates, sign messages, and encrypt messages. Much of what PGP was meant for is also available in OpenPGP. OpenPGP is defined by the OpenPGP Working Group and you can see the names of some of these guardians of privacy in RFC 4880 from 2007. Names like J. Callas, L. Donnerhacke, H. Finney, D. Shaw, and R. Thayer. Despite the corporate acquisitions, the money, and the reprioritization of projects, these people saw fit to put powerful encryption into the hands of real humans - and once that Pandora's box had been opened and the First Amendment was protecting that encryption as free speech, to keep it that way. Use Apple Mail? GPGTools puts all of this in your hands. Use Android? Get FairEmail. Use Windows? Grab EverDesk. 

This specific entry felt a little timely. Occasionally I hear senators tell companies they need to leave backdoors in products so the government can decrypt messages. And a terrorist attack forces us to rethink that basic idea of whether software that enables encryption is protected by freedom of speech. Or we choose to attempt to ban a company like WeChat, testing whether foreign entities who publish encryption software are also protected. Especially when you consider whether Tencent is harvesting user data or if the idea they are doing that is propaganda. For now, US courts have halted a ban on WeChat. Whether it lasts is one of the more intriguing things I'm personally watching these days, despite whatever partisan rhetoric gets spewed from either side of the aisle, simply for the refinement to the legal interpretation that to me began back in 1993. After over 25 years we still continue to evolve our understanding of what truly open and peer reviewed cryptography being in the hands of all of us actually means to society. 

The inspiration for this episode was a debate I got into about whether the framers of the US Constitution would have considered encryption, especially in the form of open source public and private key encryption, to be free speech. And it's worth mentioning that Washington, Franklin, Hamilton, Adams, and Madison all used ciphers to keep their communications private. And for good reason, as they knew what could happen should their communications be leaked, given that Franklin had actually leaked private communications when he was the postmaster general. Jefferson even developed his own wheel cipher, which was similar to the one the US Army used in 1922. It comes down to privacy. The Constitution does not specifically call out privacy; however, the First Amendment guarantees the privacy of belief, the Third, the privacy of the home, the Fourth, privacy against unreasonable search, and the Fifth, privacy of personal information in the form of the privilege against self-incrimination. And giving away a private key is potentially self-incrimination. Further, the Ninth Amendment has broadly been defined as the protection of privacy. 

So yes, it is safe to assume they would have supported the transmission of encrypted information and therefore the cipher used to encrypt to be a freedom. Arguably the contents of our phones are synonymous with the contents of our homes though - and if you can have a warrant for one, you could have a warrant for both. The difference is that you have to physically come to my home to search it - whereas a foreign government with the same keys might be able to decrypt other data, potentially without anyone knowing what happened. The Electronic Communications Privacy Act of 1986 helped with protections, but with more and more data residing in the cloud - or, as with our mobile devices, synchronized with the cloud - and with potentially harmful data about people around the globe residing with (or being analyzed by) people in countries that might not share the same ethics, it's becoming increasingly difficult to know the difference between keeping our information private, which the framers would likely have supported, and keeping people safe. Jurisprudence has never kept up with the speed of technological progress, but I'm pretty sure that Jefferson would have liked to have shared a glass of his favorite drink, wine, with Zimmermann. Just as I'm pretty sure I'd like to share a glass of wine with either of them. At DefCon or elsewhere!

1996: A Declaration of the Independence of Cyberspace


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover a paper by one of the more colorful characters in the history of computing. 

John Perry Barlow wrote songs for the Grateful Dead, ran a cattle ranch, was a founder of the Electronic Frontier Foundation, was a founder of the Freedom of the Press Foundation, was a fellow emeritus at Harvard, and early Internet pioneer. 

A bit more of the old-school libertarian, he believed the Internet should be free. And to this end, he published an incredibly influential paper in Davos, Switzerland in 1996. That paper did as much during the foundational years of the still-nascent Internet as anything else. And so here it is. 


A Declaration of the Independence of Cyberspace

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.

You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces. You do not know our culture, our ethics, or the unwritten codes that already provide our society more order than could be obtained by any of your impositions.

You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don't exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different.

Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.

Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose.

In the United States, you have today created a law, the Telecommunications Reform Act, which repudiates your own Constitution and insults the dreams of Jefferson, Washington, Mill, Madison, DeToqueville, and Brandeis. These dreams must now be born anew in us.

You are terrified of your own children, since they are natives in a world where you will always be immigrants. Because you fear them, you entrust your bureaucracies with the parental responsibilities you are too cowardly to confront yourselves. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.

In China, Germany, France, Russia, Singapore, Italy and the United States, you are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace. These may keep out the contagion for a small time, but they will not work in a world that will soon be blanketed in bit-bearing media.

Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish.

These increasingly hostile and colonial measures place us in the same position as those previous lovers of freedom and self-determination who had to reject the authorities of distant, uninformed powers. We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.

We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.


Thank you to John Perry Barlow for helping keep the Internet as de-regulated as it can be. Today, as we are overwhelmed by incorrect tweets (no matter what side of the political aisle you fall on), disinformation, and political manipulation, we have to rethink this foundational concept. And I hope we keep coming back to the same realization - the government has no sovereignty where we gather. 

Thank you for tuning in to this episode of the history of computing podcast. We are so, so lucky to have you. Have a great day. 

Claude Shannon and the Origins of Information Theory


The name Claude Shannon has come up 8 times so far in this podcast. More than any single person. We covered George Boole and the concept that Boolean values are 0s and 1s, and that using Boolean algebra you can abstract simple circuits into practically any higher level concept. And Boolean algebra had been used by a number of mathematicians to perform some complex tasks. Including by Lewis Carroll in Through The Looking Glass to make words into math. 

And binary had effectively been used in Morse code to enable communications over the telegraph. 

But it was Claude Shannon who laid the foundation for a theory that took the concept of communicating over the telegraph, applied Boolean algebra, and made a higher level of communication possible. And it all starts with bits, which we can thank Shannon for. 

Shannon grew up in Gaylord, Michigan. His mother was a high school principal and his grandfather had been an inventor. He built a telegraph as a child, using a barbed wire fence. But barbed wire isn't the greatest conductor of electricity and so… noise. And thus information theory began to ruminate in his mind. He went off to the University of Michigan and got a Bachelors in electrical engineering and another in math. A perfect combination for laying the foundation of the future. 

And he got a job as a research assistant to Vannevar Bush, who wrote the seminal paper, As We May Think. At that time, Bush was working at MIT on The Thinking Machine, or Differential Analyzer. This was before World War II and they had no idea, but their work was about to reshape everything. At the time, what we think of as computers today were electro-mechanical. They had gears that were used for the more complicated tasks, and switches, used for simpler tasks. 

Shannon devoted his masters thesis to applying Boolean algebra, thus getting rid of the wheels, which moved slowly, and allowing the computer to go much faster. He broke Boole's Laws of Thought down into a form that could be applied to relay and switching circuits. That paper, called A Symbolic Analysis of Relay and Switching Circuits, came out in 1937 and helped set the stage for the Hackers revolution that came shortly thereafter at MIT. 
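The mapping Shannon worked out is simple enough to sketch in a few lines. Contacts wired in series behave like AND, contacts in parallel like OR, and a normally closed contact like NOT - the function names here are mine, for illustration, not Shannon's notation:

```python
# Shannon's insight: relay circuits obey Boolean algebra.

def series(a, b):    # current flows only if both contacts are closed -> AND
    return a and b

def parallel(a, b):  # current flows if either contact is closed -> OR
    return a or b

def inverted(a):     # a normally closed relay contact -> NOT
    return not a

# The circuit (x AND y) OR (NOT x), built from switches:
def circuit(x, y):
    return parallel(series(x, y), inverted(x))

# Boolean algebra makes simplification mechanical: this circuit reduces
# to (NOT x) OR y, which needs fewer contacts. Check all four inputs:
for x in (False, True):
    for y in (False, True):
        assert circuit(x, y) == ((not x) or y)
```

That reduction step is the whole point of the thesis: once a circuit is an algebraic expression, you can simplify it on paper before ever wiring a relay.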

At the urging of Vannevar Bush, he got his PhD in Biology, pushing genetics forward by theorizing that you could break the genetic code down into a matrix. Watson and Crick would publish the double helix structure of DNA in 1953, with Rosalind Franklin's X-ray crystallography capturing the first photo of the structure, and George Gamow would go on to theorize how that code might be read. 

He headed off to Princeton in 1940 to work at the Institute for Advanced Study, where Einstein and von Neumann were. He quickly moved over to the National Defense Research Committee, as the world was moving towards World War II. A lot of computing was going into making projectiles, or bombs, more accurate. He co-wrote a paper called Data Smoothing and Prediction in Fire-Control Systems during the war. 

He'd gotten a primer in early cryptography, reading The Gold-Bug by Edgar Allan Poe as a kid. And it struck his fancy. So he started working on theories around cryptography, everything he'd learned forming into a single theory. He would have lunch with Alan Turing during the war. And it was around this work that he first coined the term "information theory" in 1945.

A universal theory of communication gnawed at him and formed during this time, from the Institute, to the National Defense Research Committee, to Bell Labs, where he helped encrypt communications between world leaders. He hid the work from everyone, even through failed relationships. He broke information down into the smallest possible unit, a bit, short for a binary digit. He worked out how to compress information that was most repetitive. Similar to how Morse code compressed the number of taps on the electrical wire by making the most common letters the shortest to send. Eliminating redundant communications established what we now call compression. 

Today we use the term lossless compression frequently in computing. He worked out that the minimum average number of bits needed to send a message is H = −Σ pᵢ log₂ pᵢ - or entropy. 

His paper, put out while he was at Bell, was called "A Mathematical Theory of Communication" and came out in 1948. You could now change any data to a zero or a one and then compress it. Further, he had to find a way to calculate the maximum amount of information that could be sent over a communication channel before it became garbled, due to loss. We now call this the Shannon Limit. And so once we had that, he derived how to analyze information with math to correct for noise. That barbed wire fence could finally be useful. This would be used in all modern information connectivity. For example, when I took my Network+ we spent an inordinate amount of time learning about Carrier-sense multiple access with collision detection (CSMA/CD) - a media access control (MAC) method that uses carrier-sensing to defer transmissions until no other stations are transmitting.
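That limit for a noisy channel is usually written as the Shannon-Hartley theorem, C = B log₂(1 + S/N). A quick sketch - the phone-line numbers here are rough assumed values, not from the paper:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)  # convert decibels to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A rough model of an analog phone line: ~3 kHz of usable bandwidth and
# ~30 dB signal-to-noise ratio (assumed, ballpark figures).
c = shannon_capacity(3000, 30)
print(round(c))  # ~29,902 bits/sec
```

Which is a decent hint at why dial-up modems stalled out in the low tens of kilobits: they were pressed up against the limit of the channel itself, not the limit of the engineers.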

And as his employer, Bell Labs helped shape the future of computing. Along with Unix, C, C++, the transistor, and the laser, information theory is a less tangible discovery - yet given what we all have in our pockets and on our wrists these days, maybe a more impactful one. Having mapped the limits, Bell started looking to reach them. And so the digital communication age was born when the first modem came out of his former employer, Bell Labs, in 1958. And just across the way in Boston, ARPA would begin working on the first Interface Message Processor in 1967, the humble beginnings of the Internet.

His work done, he went back to MIT. His theories were applied to all sorts of disciplines. But his name comes up less and less. Over time we started placing bits on devices. We started retrieving those bits. We started compressing data. Digital images, audio, and more. It would take 35 or so years for all of that to become commonplace. 

He consulted with the NSA on cryptography. In 1949 he published Communication Theory of Secrecy Systems, which pushed cryptography to the next level. His paper Prediction and Entropy of Printed English in 1951 practically created the field of natural language processing, which evolved into various branches of machine learning. He helped give us the Nyquist–Shannon sampling theorem, used in avoiding aliasing, deriving maximum throughput, RGB, and of course signal to noise. 

He loved games. He theorized the Shannon Number, or the game-tree complexity of chess. In case you're curious, that works out to roughly 10 to the 120th power possible games - far too many for even Deep Blue to brute force; it won by searching deeply enough, not exhaustively. His love of games continued and in 1949 he presented Programming a Computer for Playing Chess. That was the first time we thought about computers playing chess. And he'd have a standing bet that a computer would beat a human grand master at chess by 2001. Garry Kasparov lost to Deep Blue in 1997.

That curiosity extended far beyond chess. He would make Theseus in 1950 - a maze with a mouse that learned how to escape, using relays from phone switches. One of the earliest forms of machine learning. In 1961 he would co-invent the first wearable computer to help win a game of roulette. That same year he designed the Minivac 601 to help teach how computers worked. 

So we’ll leave you with one last bit of information. Shannon’s maxim is that “the enemy knows the system.” I used to think it was just a shortened version of Kerckhoffs's principle, which is that it should be possible to understand a cryptographic system, for example, modern public key ciphers, but not be able to break the encryption without a private key. Thing is, the more I know about Shannon the more I suspect that what he was really doing was giving the principle a broader meaning. So think about that as you try and decipher what is and what is not disinformation in such a noisy world. 

Lots and lots of people would carry on the great work in information theory. Like Kullback–Leibler divergence, or relative entropy. And we owe them all our thanks. But here's the thing about Shannon: math. He took things that could have easily been theorized - and he proved them. Because science can refute disinformation. If you let it. 

A Retrospective On Google, On Their 22nd Birthday


We are in strange and uncertain times. The technology industry has always managed to respond to strange and uncertain times with incredible innovations that lead to the next round of growth. Growth that often comes with much higher rewards and leaves the world in a state almost unimaginable in previous iterations. The last major inflection point for the Internet, and computing in general, was when the dot com bubble burst. 

The companies that survived that time in the history of computing and stayed true to their course sparked the Web 2.0 revolution. And their shareholders were rewarded: exits and valuations that had been in the millions in the dot com era went into the billions in the Web 2.0 era. None as iconic as Google. They finally solved how to make money at scale on the Internet and in the process validated that search was a place to do so.

Today we can think of Google, or the resulting parent Alphabet, as a multi-headed hydra. The biggest of those heads includes Search, which includes AdWords and AdSense. But Google has long since stopped being a one-trick pony. They also include Google Apps, Google Cloud, Gmail, YouTube, Google Nest, Verily, self-driving cars, mobile operating systems, and one of the more ambitious, Google Fiber. But how did two kids going to Stanford manage to become the third US company to be valued at a trillion dollars?

Let's go back to 1998. The Big Lebowski, Fear and Loathing in Las Vegas, There's Something About Mary, The Truman Show, and Saving Private Ryan were in the theaters. Puff Daddy hadn't transmogrified into P Diddy. And Usher had three songs in the Top 40. Boyz II Men, Backstreet Boys, Shania Twain, and Third Eye Blind couldn't be avoided on the airwaves. They're now pretty much relegated to 90s disco nights. But technology offered a bright spot. We got the first MP3 player, the Intel Celeron and Xeon, the Apple iMac, MySQL, v.90 modems, and StarCraft, and two Stanford students named Larry Page and Sergey Brin took a research project they started in 1996 with Scott Hassan, and started a company called Google (although Hassan would leave Google before it became a company). 

There were search engines before Page and Brin. But most produced search results that just weren't that great. In fact, most were focused on becoming portals. They took their cue from AOL and other ISPs who had springboarded people onto the web from services that had been walled gardens. As they became interconnected into a truly open Internet, the amount of diverse content began to explode and people just getting online found it hard to actually find things they were interested in. Going from ISPs who had portals to getting on the Internet, many began using a starting page like Archie, LYCOS, Jughead, Veronica, Infoseek, and of course Yahoo!

Yahoo! had grown fast out of Stanford, having been founded by Jerry Yang and David Filo. By 1998, the Yahoo! page was full of text. Stock tickers, links to shopping, and even horoscopes. It took a lot of the features from the community builders at AOL. The model to take money was banner ads and that meant keeping people on their pages. Because search wasn't yet monetized and in fact acted against the banner loading business model, searching for what you really wanted to find on the Internet didn't get a lot of love. The search engines or portals of the day had pretty crappy search engines compared to what Page and Brin were building. 

They initially called the search engine BackRub, back in 1996. As academics (and the children of academics) they knew that the more papers cited a given paper, the more valuable that paper was considered. Applying the same logic let them rank websites based on how many other sites linked to them. This became the foundation of the original PageRank algorithm, which continues to evolve today. The name BackRub came from this weighting based on backlinks, a concept that had appeared earlier in a tool called RankDex, developed by Robin Li, who went on to found Baidu. 
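The core idea fits in a few lines. Here’s a minimal sketch of PageRank-style power iteration in Python - a toy with a made-up three-page web, not Google’s actual implementation:

```python
# Toy PageRank: a page's score flows out evenly along its links, and the
# damping factor models a surfer who sometimes jumps to a random page.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone starts each round with the "random jump" share...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...then each page distributes its rank across its outbound links.
        for page, outbound in links.items():
            for target in outbound:
                new_rank[target] += damping * rank[page] / len(outbound)
        rank = new_rank
    return rank

# Hypothetical web: pages a and b both link to c; c links back to a.
web = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
# c collects the most inbound links, so it ends up ranked highest.
```

Real PageRank also has to handle dangling pages with no outbound links and a web of billions of nodes, but the ranking-by-inbound-links intuition is the same.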

Keep in mind, it started as a research project. The transition from research project to company meant finding a good name. Being math nerds, they landed on "Google," a play on "googol," a 1 followed by a hundred zeros.

A year in, they were still running off Stanford University computers. As their crawlers searched the web, they needed more and more computing time. So they went out looking for funding, and in 1998 got $100,000 from Sun Microsystems cofounder Andy Bechtolsheim. Jeff Bezos of Amazon, David Cheriton, Ram Shriram, and others kicked in some money as well, giving them a million-dollar round of angel investment. And their algorithm kept maturing as they cataloged more and more sites. By 1999 they went out and raised $25 million from Kleiner Perkins and Sequoia Capital, insisting the two firms invest equally, which hadn’t been done before. 

They were frugal with their money, which allowed them to weather the coming storm when the dot-com bubble burst. They built computers to process data using off-the-shelf hardware they got at Fry’s and other computer stores, and they brought in some of the best talent in the area as other companies were going bankrupt. 

They also used that money to move into offices in Palo Alto, and in 2000 they started selling ads through a service they called AdWords. It was a simple site, and the ads were text instead of the banners popular at the time. It was an instant success, and I remember being drawn to it after years of looking at that increasingly complicated Yahoo! landing page. They successfully inked a deal with Yahoo! to provide organic and paid search, betting the company that they could make lots of money. And they were right. The world was ready for simple interfaces that provided relevant results. And the results were relevant for advertisers, who could move to a pay-per-click model and bid on how much they wanted to pay for each click. Google could serve ads for nearly any company with little human interaction, because they spent the time and money to build great AI to power the system. You put in a credit card number and got accurate projections of how successful an ad would be. In fact, relevant ads often paid less per click than irrelevant ones. And it quickly became apparent that they were just printing money on the back of the new ad system.
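That last point, relevant ads paying less per click, falls naturally out of ranking ads by bid times relevance rather than bid alone. Here’s a toy sketch of that kind of auction in Python; the ad names, bids, and relevance scores are all made up, and this illustrates the general idea only, not Google’s actual AdWords logic:

```python
# Toy pay-per-click auction: ads are ranked by bid x estimated relevance,
# and the winner pays just enough to keep its spot above the runner-up.
# (Illustrative sketch only; names and numbers are hypothetical.)
def run_auction(ads):
    # ads: list of (name, bid_dollars, relevance between 0 and 1)
    ranked = sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Minimum price that still outranks the runner-up's bid x relevance.
    price = runner_up[1] * runner_up[2] / winner[2]
    return winner[0], round(price, 2)

ads = [
    ("relevant_ad", 1.00, 0.9),  # lower bid, high relevance
    ("spammy_ad",   1.20, 0.5),  # higher bid, low relevance
]
winner, price = run_auction(ads)
# The relevant ad wins despite bidding less, and pays under its own bid.
```

Ranking this way rewards ads people actually click, which is why relevance effectively buys a discount.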

They brought in Eric Schmidt to run the company, per the agreement they had made when they raised the $25 million, and by 2002 they were booking $400 million in revenue, at a 60% margin. These are crazy numbers, and they enabled Google to continue making aggressive investments. The dot-com bubble may have burst, but Google was a clear beacon of light that the Internet wasn’t done for.

In 2003 Google moved into a space now referred to as the Googleplex, in Mountain View, California. In a sign of the times, that was land formerly owned by Silicon Graphics. They saw how the ad model could be improved beyond paid placement and banners, acquired Applied Semantics, and launched AdSense. They could afford to, with $1.5 billion in revenue. 

Google went public in 2004, with revenues of $3.2 billion. Underwritten by Morgan Stanley and Credit Suisse, who took half the standard fees for leading the IPO, Google sold nearly 20 million shares. They were basically printing money by then, and the company had a market cap of $23 billion, just below that of Yahoo!. That’s the year they acquired Where 2 Technologies to convert its mapping technology into Google Maps, which launched in 2005. They also bought Keyhole in 2004, a company the CIA had invested in, and released its product as Google Earth in 2005. That technology then became critical for turn-by-turn directions, and the directions were enriched using another 2004 acquisition, ZipDash, for real-time traffic information. At this point, Google wasn’t just responding to queries about content on the web, but to queries about the world at large. They also released Gmail and Google Books in 2004.

By the end of 2005 they were up to $6.1 billion in revenue and they continued to invest money back into the company aggressively, looking not only to point users to pages but get into content. That’s when they bought Android in 2005, allowing them to answer queries using their own mobile operating system rather than just on the web. On the back of $10.6 billion in revenue they bought YouTube in 2006 for $1.65 billion in Google stock. This is also when they brought Gmail into Google Apps for Your Domain, now simply known as G Suite - and when they acquired Upstartle to get what we now call Google Docs. 

At $16.6 billion in revenues, they bought DoubleClick in 2007 for $3.1 billion to get the relationships DoubleClick had with the ad agencies. 

They also acquired Tonic Systems in 2007, which would become Google Slides, completing a suite of apps that could compete with Microsoft Office.

The first Android release came in 2008, on the back of $21.8 billion in revenue. They also released Chrome that year, a project that came out of hiring a number of Mozilla Firefox developers, even after Eric Schmidt had stonewalled the idea for six years. The project had been managed by an up-and-coming Sundar Pichai. That year they also released Google App Engine, to compete with Amazon’s EC2. 

They bought On2, reCAPTCHA, AdMob, VoIP company Gizmo5, Teracent, and AppJet in 2009, on $23.7 billion in revenue. Then came Aardvark, reMail, Picnik, DocVerse, Episodic, Plink, Agnilux, LabPixies, BumpTop, Global IP Solutions, Simplify Media, Invite Media, Metaweb, Zetawire, Instantiations, Jambool, Angstro, SocialDeck, QuickSee, Plannr, BlindType, Phonetic Arts, and Widevine Technologies in 2010, on $29.3 billion in revenue.

In 2011, Google bought Motorola Mobility for $12.5 billion to get access to patents for mobile phones, along with almost two dozen other companies. This was on the back of nearly $38 billion in revenue. 

The battle with Apple intensified when Apple removed Google Maps from iOS 6 in 2012. But on $50 billion in revenue, Google wasn’t worried. They released the Chromebook in 2012 and announced that Google Fiber would be rolled out in Kansas City. 

They launched Google Drive as well. And in 2013 they bought Waze for just shy of a billion dollars, to get crowdsourced data that could help bolster what Google Maps was doing. That was on $55.5 billion in revenue. 

In 2014, at $65 billion in revenue, they bought Nest, getting thermostats and cameras in the portfolio. 

Pichai, who had worked in product on Drive, Gmail, Maps, and Chromebooks, took over Android, and by 2015 he was named the next CEO of Google when the company restructured, with Alphabet created as the parent of the various companies that made up the portfolio. By then they were up to $74.5 billion in revenue. And they needed a new structure, given the size and scale of what they were doing. 

In 2016 they launched Google Home, which has now brought AI into 52 million homes. They also bought nearly 20 other companies that year, including Apigee, to get an API management platform. By then they were up to nearly $90 billion in revenue.

2017 saw revenues rise to $110 billion and 2018 saw them reach $136 billion. 

In 2019, Pichai became the CEO of Alphabet, now presiding over a company with over $160 billion in revenues. One that has bought over 200 companies and employs over 123,000 humans. Google’s mission is “to organize the world's information and make it universally accessible and useful” and it’s easy to connect most of the acquisitions with that goal.

I have a lot of friends in and out of IT who think Google is evil. Despite the company’s stated desire not to be evil, any organization that grows at such a mind-boggling pace is bound to rub people wrong here and there. I’ve always gladly used their free services, even knowing that when you aren’t paying for a product, you are the product. We have a lot to thank Google for on this birthday. As Netscape was the symbol of the dot-com era, Google was the symbol of Web 2.0. They took the mantle for free mail from Hotmail after Microsoft screwed the pooch with that one. 

They applied math to everything, revolutionizing marketing and helping people connect with the information they were most interested in. They cobbled together a mapping solution and changed the way we navigate through cities. They made Google Apps and evolved the way we use documents, making us more collaborative and forcing the competition, namely Microsoft Office, to adapt as well. They dominated the mobile market, capturing over 90% of devices. They innovated cloud stacks. And here’s the crazy thing: from the beginning, they didn’t make up a lot. They borrowed the foundational principles of that original algorithm from RankDex, Gmail was a new and innovative approach to Hotmail, Google Maps was a better Encarta, and their cloud offerings were structured similarly to Amazon’s. And the list of acquisitions that got them patents or talent or ideas to launch innovative services is just astounding. 

Chances are that today you do something that touches Google. Whether it’s the original search, controlling the lights in your house with Nest, using a web service hosted in their cloud, or sending and receiving email through Gmail or one of their hundreds of other services, the team at Google has left an impact on each of the kinds of services they enable. They have innovated business and reaped the rewards. And on their 22nd birthday, we all owe them a certain level of thanks for everything they’ve given us.

So until next time, think about all the services you interact with. And think about how you can improve on them. And thank you for tuning in to this episode of the History of Computing Podcast. 

The Oregon Trail


The Oregon Trail is a 2,100-plus mile wagon route that stretched from the Missouri River to settleable lands in Oregon, cutting through Kansas, Nebraska, Wyoming, and Idaho along the way. After parts were charted by Lewis and Clark from 1804 to 1806, the route was pioneered by fur traders in 1811, and in the 1830s Americans began to journey across it to settle the wild lands of the Pacific Northwest. Today, Interstates 80 and 84 follow parts of it. But the game is about the grueling journey that people made from the 1820s on, which saw streams of wagons flow over the route in the 1840s. And over the next hundred years it became a thing talked about in textbooks but difficult to relate to in a land of increasing abundance. 

So flash forward to 1971. America is a very different place than those wagonloads of humans would have encountered at Fort Boise or on the Bozeman Trail, both of which now have large cities named after them. In 1971, NPR produced its first broadcast. Amtrak was created in the US. Greenpeace was founded. Fred Smith created Federal Express. A Clockwork Orange was released. And Don Rawitsch wrote The Oregon Trail while he was a senior at Carleton College, to help teach an 8th grade history class in Northfield, Minnesota. 

It’s hard to imagine these days, but this game was cutting edge at the time. Another event in 1971: the Intel 4004 microprocessor came along, which would change everything in computing in just 10 short years. In 1971, when Apollo 14 landed on the moon, its computer was made of hand-crafted coils and chips, and a ten-key pad was used to punch in code. When Ray Tomlinson invented email that year, computers weren’t interactive. When IBM invented the floppy disk that year, no one would have guessed they would someday be used to give school children dysentery all across the world.

When he first wrote OREGON, as the game was originally known, Don was using a time-shared HP 2100 minicomputer at Pillsbury (yes, the Pillsbury of doughboy fame, who make those lovely, flaky biscuits). The HP was running Time-Share BASIC, and Don roped in his roommates, Paul Dillenberger and Bill Heinemann, to help out. Back then, the computer wrote output to a teletype and took data in from tape terminals. But the kids loved it. They would take a wagon from Independence, Missouri to the Willamette Valley in Oregon, making a grueling journey in a covered wagon in 1848. And they might die of dysentery, starvation, mountain fever, or any other ailment Rawitsch could think of. 

Gaming on paper tape was awkward, but the kids were inspired. They learned about computers and the history of how the West was settled at the same time. When the class was over, Don printed the code for the game, probably not thinking much would happen with it after that.

But then he got hired by the Minnesota Educational Computing Consortium, or MECC, in 1974. Back in the 60s and 70s, Minnesota was a huge hub of computing. Snow White and several of the Seven Dwarfs had offices in the state: early mainframe pioneers like Honeywell, Unisys, ERA (and so Control Data Corporation and Cray from there), and IBM all did a lot of work there. The state created MECC to build educational software for classrooms following the successes at TIES, the Total Information for Educational Systems, which had brought a time-sharing service on an HP 2000, along with training and software (which they still provide), to Minnesota schools. 

Don dug that code from 1971 back up and typed it back into the time-sharing computers at MECC. He tweaked it a little and made it available on the CDC Cyber 70 at MECC, and before you knew it, thousands of people were playing his game. In 1978 he published the source code in Creative Computing magazine as The Oregon Trail. Then JP O’Malley modified the BASIC program to run on an Apple II, and the Apple Pugetsound Program Library Exchange posted the game for their user group. 

The Oregon Trail 2 would come along that year as well, and by 1980 MECC released the game with better graphics as part of an Elementary series of educational titles - though the graphics got better still with a full release as a standalone game in 1985. Along the way it was ported to the Atari in 1983 and the Commodore 64 in 1984. But the 1985 version is the one we played in my school.

We loved getting to play on the computers in school. The teachers seemed to mostly love getting a break as we were all silent while playing, until we lost one of our party - and then we’d laugh and squeal at the same time! We’d buy oxen, an extra yoke for our wagon, food, bullets, and then we’d set off on our journey to places many of us had never heard of. We’d get diseases, break limbs, get robbed, and watch early versions of cut scenes in 8-bit graphics. And along the way, we learned.  

We learned about a city called Independence, Missouri. And that life was very different in 1848. We learned about history. We learned about game mechanics. We started with $800. 

We learned about bartering, and that carpenters were better at fixing wagon wheels than bankers were. We tried to keep our party alive, and we learned that it’s a good idea to save a little money to ferry across rivers, so you didn’t sink or have one of your party drown. We learned the rudimentariness of shooting in games as we tried to kill a bear here and there. We learned that rabbits didn’t give us much meat. We learned to type BANG and WHAM fast so we could shoot animals, and later we learned to aim with the arrow keys and fire with the space bar. The bison moved slowly and gave more meat than the 100 pounds we could carry back to our wagon. So we shot them. 

We learned that you got double the points for playing the carpenter and triple for playing the farmer. We wanted to keep our family alive not only because we got to name them (often making fun of our friends in class) but also because they gave us more points. As did the possessions we were able to keep. 

By 1990, with a changing tide, the game came to DOS, and by 1991 it was ported to the Mac. Mouse support was added in 1992, and it came to Windows 3 in 1993, when Softkey released The Oregon Trail: Classic Edition.

By 1995 The Oregon Trail made up a third of the MECC budget, raking in $30 million per year and helping fund other titles. Oregon Trail II came in 1995, 3 in 1997, 4 in 1999, and 5 made it into the new millennium in 2001, all released for Windows and Mac. Ten years later it would come to the modern era of console gaming, making it to the Wii and the 3DS. 

And you can still learn all of what we learned by playing the game on the Internet Archive. The Internet Archive page hosts the 1990 version that was ported and made available for the Apple II, Macintosh, Windows, and DOS, and that page alone has had nearly 7.2 million views. The game has sold over 65 million copies as well. 

The Oregon Trail is beloved by many. I see shirts that say You Have Died of Dysentery, and card versions of the game in stores. I’ve played Facebook and mobile versions. It’s even been turned into plays and parodied in TV shows. That wagon is one of the best-known symbols in gaming lore. And we still use many of the game mechanics introduced back then, in games from Dragon Warrior to World of Warcraft, whose trading and inventory systems it helped inspire. 

We can thank The Oregon Trail for giving our teachers a break from teaching us in school and giving us a break from learning. Although I suspect we learned plenty. And we can thank MECC for continuing the fine tradition of computer science in Minnesota. And we can thank Don for inspiring millions, many of whom went on to create their own games.

And thank you, listener, for tuning in to this episode of The History of Computing Podcast. We are so so so lucky to have you. Have a great day! And keep in mind, a steady pace will get you to the end of the trail before the snows come in, with plenty of time to take ferries across the rivers. Rest when you need it. And no, you probably won’t beat my high score. 



SimCity is one of those games that helped expand the collective public consciousness. 

I have a buddy who works on traffic flows in Minneapolis. When I asked how he decided to go into urban planning, he quickly responded with “playing SimCity.” Imagine that, a computer game inspiring a generation of people that wanted to make cities better. How did that come to be?

Will Wright was born in 1960. He went to Louisiana State University, then Louisiana Tech, and then to The New School in New York. By then, he had gotten an Apple II+ and started playing computer games, including Life, a game initially conceived by mathematician John Conway in 1970 - a game that expanded the minds of everyone who came in contact with it. Life had begun on the PDP, then spread through BBC BASIC and beyond. It allowed players to set an initial configuration of cells and watch them mutate over time. 
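The rules behind that mutation are famously tiny: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three. Here’s a minimal Python sketch of Life on an unbounded grid (the blinker example is just an illustration):

```python
from collections import Counter

# Conway's Game of Life: live cells are stored as a set of (x, y) pairs,
# so the grid is effectively unbounded.
def step(live_cells):
    # Count live neighbors for every cell next to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row flip between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
vertical = step(blinker)    # {(1, 0), (1, 1), (1, 2)}
restored = step(vertical)   # back to the original row
```

Two lines of rules, endless emergent behavior - which is exactly what hooked players like Wright.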

After reading about Life, Wright wanted to port it to his Apple, so he learned Applesoft BASIC and Pascal. He tinkered, and by 1984 he was able to produce a game called Raid on Bungeling Bay. And as many a Minecrafter can tell you, part of the fun was really building the islands in a map editor he built for the game. He happened to also be reading about urban planning and system dynamics. He just knew there was something there - something that could take part of the fun of Life and of editing maps in games, pair it with this newfound love of urban planning, and give it to regular humans. Something that just might expand our mental models about where we live, and about games. 

This led him to build software that gamified editing maps, where every choice we made impacted the map over time, and where it was on us to build the perfect map. That game was called Micropolis and would become SimCity. One problem: none of the game publishers wanted to produce it when it was ready for the Commodore 64 in 1985. After Brøderbund turned him down, he had to go back to the drawing board. 

So Wright teamed up with his friend Jeff Braun, and together they founded Maxis Software in 1987. They released SimCity in 1989 for the Mac and Amiga and, once it had been ported, for the Atari ST, DOS-based PCs, and the ZX Spectrum. Brøderbund did eventually agree to distribute it as it matured. 

And people started to get this software, staring at a blank slab of land where we zone areas as commercial or residential. We tax those areas and can adjust the rates, giving us money to zone other areas, provide electricity, water, and other services, and then build parks, schools, hospitals, police stations, and so on. The more dense and populous the city becomes, the more difficult the game gets. The population fluctuates, and we can tweak settings to grow or shrink the city. I was always playing to grow, until I realized sometimes it’s nice to stabilize and look for harmony instead.

And we see the evolution over time. The initial choices we made could impact the ability to grow forever. But unlike Life we got to keep making better and better (or worse and worse) choices over time. We delighted in watching the population explode. In watching the city grow and flourish. And we had to watch parts of our beloved city decay. We raised taxes when we were running out of money and lowered them when population growth was negatively impacted. We built parks and paid for them. We tried to make people love our city. 

We are only limited in how great a city we can build by our own creativity, and by our ability to place parts of the city so that people live in harmony with the economic and ecological impacts of other buildings and zones. For example, build a power plant as far from residential buildings as you can, because people don’t want to live right by a power plant. But running power lines is expensive, so it can’t be too far away in the beginning. 

The game mechanics motivate us to push the city into progress. To build. To develop. People choose to move to our cities based on how well we build them. It was unlike anything else out there. And it was a huge success. 

SimCity 2000 came along in 1993. Graphics had come a long way, and you could now see the decay in the icons of buildings. It expanded the types of power plants we could build and added churches, museums, prisons, and zoos, each with an impact on the way the city grows. As the development team’s understanding of both programming and urban planning grew, they added city ordinances. The game got more and more popular. 

SimCity 3000 was the third installment in the series, and it came out in 1999. By then, the game had sold over 5 million copies. That’s when they added slums and median incomes to create classifications. And large malls, which negatively impact smaller commercial zones. And toxic waste conversion plants. And prisons, which hit nearby residential areas. And casinos, which increase crime. But each has a huge upside as well. As graphics cards continued to get better, the simulation also got richer, giving us waterfalls, different types of trees, more realistic grass, and even snow. 

Maxis even dabbled with using its software to improve businesses. Maxis Business Simulations built simulations for refineries and healthcare as well.  

And then came The Sims, which Wright thought of after losing his house to a fire in 1991. Here, instead of simulating a whole city of people at once, we simulated a single person, or Sim, and attempted to lead a fulfilling life by satisfying the needs and desires of our Sim - buying furniture, building larger homes, having a family, and just… well, living life. But the board at Maxis didn’t like the idea. Maxis was acquired by Electronic Arts in 1997, and EA was far more into the Sims idea, so The Sims was released in 2000. It has sold nearly 200 million copies and raked in over $5 billion in sales, making it one of the best-selling games of all time. Even though now it’s free on mobile devices, with tons of in-app purchases… 

And since the acquisition of Maxis, SimCity has been distributed by EA. SimCity 4 would come along in 2003, continuing to improve the complexity and gameplay. And with processors getting faster, cities could get way bigger and more complex. The 2013 SimCity reboot came from lead designer Stone Librande and team. They added a Google Earth type of zoom effect to see cities and some pretty awesome road creation tools. And the sounds of cars honking on the streets, birds chirping, airplanes flying over, and fans cheering in stadiums were amazing. They added layers, so you could look at a colorless model of the city highlighting crime or pollution, to make tracking each of the main aspects of the game easier - like layers in Photoshop. It was pretty CPU- and memory-intensive but came with some pretty amazing gameplay. In fact, some of its neighborhood planning has been used to simulate neighborhood development efforts in real cities. 

And the game spread beyond the desktop as well, coming to the iPhone and to web browsers in 2008. I personally choose not to play anymore because I’m not into in-app purchasing. 

A lot of science fiction centers on one of two major themes: societies entering a phase of utopia, or of dystopia. The spread of computing, first into our living rooms in the form of PCs and then into our pockets via mobile devices, has helped push us in the utopian direction. 

SimCity inspired a generation of city planners, and it was inspired in turn by increasingly mature research on urban planning. It was a great step on the route to utopia, and eye-opening about the impact our city planning has on whether we advance toward a dystopian future instead. We were all suddenly able to envision better city planning and design, making cities friendlier for walking, biking, and being outdoors. Living better. Which is important in a time of continued mass urbanization. 

Computer games could now be about more than moving a dot with a paddle or controlling a character to shoot other characters. Other games with eye-opening, mind-expanding gameplay were feasible, like Sid Meier’s Civilization, which came along in 1991. But SimCity, like Life, was another major step on the way to where we are today. And it’s all the more relatable now that I’ve owned multiple homes and seen the impact of tax rates and of the services the governments in those areas provide. 

So thank you to Will Wright, for inspiring better cities. And thank you to the countless developers, designers, and product managers for continuing the great work at Maxis and then at EA. 



Today, we think of Pixar as the company that gave us such lovable characters as Woody and Buzz Lightyear, the monsters Mike Wazowski and James P. Sullivan, Nemo, Elastigirl, and Lightning McQueen. But all of that came pretty late in the history of the company.

Let’s go back to the 70s. Star Wars made George Lucas a legend. His company Lucasfilm produced American Graffiti, the Star Wars franchise, the Indiana Jones franchise, Labyrinth, Willow, and many others. Many of those movies were pioneering in the use of visual effects in storytelling, at a time when computer-aided visual effects were just emerging. So Lucas needed world-class computer engineers.


Lucas found Ed Catmull and Alvy Ray Smith at the New York Institute of Technology Computer Graphics Lab. They had been hired by the school’s founder, Alexander Schure, to help create the first computer-animated film in the mid-70s. But Lucas hired Catmull (who had been a student of Ivan Sutherland, creator of the first computer graphics software, Sketchpad) and Smith (who had worked on SuperPaint at Xerox PARC) away to run the computer division of Lucasfilm, which by 1979 was simply called the Graphics Group. 


They created REYES and developed a number of the underlying techniques used in computer graphics today. They worked on movies like Star Trek II, where the graphics still mostly stand up nearly 40 years later. And as the group grew, the technology got more mature and more useful. REYES would develop into RenderMan and become one of the best computer graphics products on the market. Pioneering, they won prizes in both science and film. RenderMan is still one of the best tools available for computer-generated lighting, shading, and shadowing.


John Lasseter joined in 1983. And while everything was moving in the right direction, Lucas, in the midst of a nasty divorce and needing the cash, sold the group as a spin-off to Steve Jobs in 1986. Jobs had just been ousted from Apple and was starting NeXT. He had the vision to bring computer graphics into homes. They developed the Pixar Image Computer for commercial sales, which shipped just after Jobs took over the company. It went for $135,000 and still required an SGI or Sun computer to work. They’d sell just over 100 in the first two years, most of them to Disney. 


The name came from the original name Alvy Ray Smith suggested for the computer: Picture Maker. That got shortened to Pixer, and then Pixar. The technology they developed along the way to the dream of a computer-animated film was unparalleled in special effects. But CPUs weren’t going fast enough to keep up. 


The P-II model came with a 3-gigabyte RAID (when most file systems couldn’t even address that much space), 4 processors, multiple video cards, 2 video processors, and a channel each for red, green, blue, and alpha. It was a beast. 


But that’s not what we think of when we think of Pixar today. You see, they had always had the desire to make a computer-animated movie. And they were getting closer and closer. Sure, selling computers to aid in computer animation was the heart of why Steve Jobs bought the company - but he, like the Pixar team, was an artist. They started making shorts to showcase what the equipment and software they were making could do. 


Lasseter made a film called Luxo Jr. in 1986 and showed it at SIGGRAPH, which was becoming the convention for computer graphics. They made a short every year, but they were selling into a niche market and sales never really took off. Jobs pumped more money into the company. He’d initially paid $5 million and capitalized the company with another $5 million. By 1989 he’d pumped $50 million in. But when sales were slow and they were bleeding money, Jobs realized the computer could never go down-market into homes, and that part of the business was sold to Vicom in 1990 for $2 million. Vicom then went bankrupt.


But the work Lasseter was doing, blending characters made purely with computer graphics and delicious storytelling, was paying off. Their animated short Tin Toy won an Academy Award in 1988. And even through repeated layoffs, that artists’ group just continued to grow. They would release more and more software - and while they weren’t building computers anymore, the software could be run on other computers, like Macs and Windows machines. 


The one bright spot was that Pixar and the Walt Disney Animation Studios were inseparable. By 1991, though, computers had finally gotten fast enough, and the technology mature enough, to make a computer-animated feature. And this is when Steve Jobs and Lasseter sold the idea of a movie to Disney. In fact, they got $24 million to make three features. They got to work on the first of those movies. Smith would leave in 1994, supposedly over a screaming match he had with Jobs about the use of a whiteboard. But Pixar was turning into a full-on film studio, about to realize the original dream they all had of creating a computer-animated motion picture, and it’s too bad Smith missed it.


That movie was called Toy Story. It would bring in $362 million globally, becoming the highest-grossing movie of 1995, and allow Steve Jobs to renegotiate the Pixar deal with Disney and take the company public in 1995. His $60 million investment would convert into over a billion dollars in Pixar stock, which later became Disney stock worth over $4 billion, making him Disney's largest single shareholder. Those shares were worth $7.4 billion when he passed away in 2011. His wife would sell half in 2017 as she diversified the holdings.


After Toy Story, Pixar would create Cars, Finding Nemo, WALL-E, Up, Onward, Monsters, Inc., Ratatouille, Brave, The Incredibles, and many other films - movies that have made close to $15 billion. But more importantly, they mainstreamed computer-animated films. And another huge impact on the history of computing was that they made Steve Jobs a billionaire and proved to Wall Street that he could run a company. After a time I think of as “the dark ages” at Apple, Jobs came back in 1996, bringing along an operating system and reinventing Apple - giving the world the iMac, the iPod, and the iPhone. And streamlining the concept of multimedia enough that music, and later film and then software, would be sold through Apple’s online services, setting the groundwork for Apple to become the most valuable company in the world.


So thank you to everyone from Pixar for the lovable characters, but also for inventing so much of the technology used in modern computer graphics - both for film and the tech used in all of our computers. And thank you for the impact on the film industry and keeping characters we can all relate to at the forefront of our minds. And thank you dear listener for tuning in to yet another episode of the History of Computing Podcast. We are so lucky to have you. And lucky to have all those Pixar movies. I think I’ll go watch one now. But I won’t be watching them on the Apple streaming service. It’ll be on the Disney service. Funny how that worked out, ain’t it?



Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover yet another of the groundbreaking technologies to come out of MIT: Sketchpad. 

Ivan Sutherland is a true computer scientist. After getting his master’s from Caltech, he migrated to the land of the Hackers and got a PhD from MIT in 1963. The great Claude Shannon supervised his thesis and Marvin Minsky was on the thesis review committee. But he wasn’t just surrounded by awesome figures in computer science, he would develop a critical piece between the Memex in Vannevar Bush’s “As We May Think” and the modern era of computing: graphics.

What was it that propelled him from PhD candidate to becoming the father of computer graphics? The 1962-1963 development of a program called Sketchpad. Sketchpad was the ancestor of the GUI, object oriented programming, and computer graphics. In fact, it was the first graphical user interface. And it was all made possible by the TX-2, a computer developed at the MIT Lincoln Laboratory by Wesley Clark and others. The TX-2 was transistorized and so fast. Fast enough to be truly interactive. A lot of innovative work had come with the TX-0 and the program would effectively spin off as Digital Equipment Corporation and the PDP series of computers. 

So it was bound to inspire a lot of budding computer scientists to build some pretty cool stuff. Sutherland’s Sketchpad used a light pen. These were photosensitive devices that worked like a stylus but sensed light from the display, letting the computer locate the pen against the dots on a cathode ray tube (CRT). Users could draw shapes on a screen for the first time. Whirlwind at MIT had allowed highlighting objects, but this graphical interface to create objects was a new thing altogether, inputting data into a computer as an object instead of loading it as code, as could then be done using punch cards.

Suddenly the computer could be used for art. There were toggle-able switches that made lines bigger. The extra memory that was pretty much only available in the hallowed halls of government-funded research in the 60s opened up so many possibilities. Suddenly, computer-aided design, or CAD, was here. 

Artists could create a master drawing and then additional instances on top, with changes to the master reverberating through each instance. They could draw lines, concentric circles, and change ratios. It would be two decades before MacPaint would bring the technology into homes across the world - and of course AutoCAD would make Autodesk one of the greatest software companies in the world.
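The master/instance idea is easy to sketch in a modern language. This is a loose illustration in Python, not Sketchpad's actual mechanism; the class and method names here are invented:

```python
# A sketch of Sketchpad's master/instance idea: instances hold a
# reference to a shared master drawing, so an edit to the master
# "reverberates" through every instance when it is next rendered.

class MasterDrawing:
    def __init__(self, lines):
        self.lines = list(lines)  # each line is ((x1, y1), (x2, y2))

class Instance:
    def __init__(self, master, offset):
        self.master = master      # shared reference, not a copy
        self.offset = offset

    def render(self):
        dx, dy = self.offset
        return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
                for (x1, y1), (x2, y2) in self.master.lines]

master = MasterDrawing([((0, 0), (1, 0))])
a = Instance(master, (10, 0))
b = Instance(master, (0, 10))

master.lines.append(((0, 0), (0, 1)))  # edit the master once...
print(a.render())  # ...and both instances now render two lines
print(b.render())
```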

The impact of Sketchpad would be profound. Sketchpad would be another of Doug Engelbart’s inspirations when building the oN-Line System and there are clear correlations in the human interfaces. For more on NLS, check out the episode of this podcast called the Mother of All Demos, or watch it on YouTube.

And Sutherland’s work would inspire the next generation: people who read his thesis, as well as his students and coworkers. 

Sutherland would run the Information Processing Techniques Office for the US Defense Department’s Advanced Research Projects Agency after Licklider returned to MIT. He also taught at Harvard, where he and his students developed the first virtual reality system in 1968, long before VPL Research commercialized the field in the 1980s. Sutherland then went to the University of Utah, where he taught Alan Kay, who gave us object-oriented programming in Smalltalk and the concept of the tablet in the Dynabook, and Ed Catmull, who co-founded Pixar - and many other computer graphics pioneers.

He founded Evans & Sutherland with David Evans, the man who built the computer science department at the University of Utah, and their company launched the careers of John Warnock, the co-founder of Adobe, and Jim Clark, the founder of Silicon Graphics. His next company would be acquired by Sun Microsystems and become Sun Labs. He would remain a Vice President and Fellow at Sun and a visiting scholar at Berkeley.

For Sketchpad and his other contributions to computing, he would be awarded a Computer Pioneer Award, become a fellow at the ACM, receive a John von Neumann Medal, receive the Kyoto Prize, become a fellow at the Computer History Museum, and receive a Turing Award. 

I know we’re not supposed to make a piece of software an actor in a sentence, but thank you Sketchpad. And thank you Sutherland. And his students and colleagues who continued to build upon his work.

One Year Of History Podcasts




The first episode of this podcast went up on July 7th, 2019. One year later, we’ve managed to cover a lot of ground, but we’re just getting started. We're over 70 episodes in, and so far my favorite was the one on Mavis Beacon Teaches Typing.

They may seem disconnected at times, but they’re not. There’s a large outline and it’s all research being included in my next book.

The podcast began with an episode on the prehistory of the computer. And we’ve had episodes on the history of batteries, electricity, superconductors, and more - to build up to what was necessary in order for these advances in computing to come to fruition.

We’ve celebrated Grace Hopper and her contributions. But we’d like to also cover a lot of other diverse voices in computing. 

There was a series on Windows, covering Windows 1, 3, and 95. But we plan to complete that series with a look at 98, Millennium, NT, 2000, and on. We covered Android, CP/M, OS/2, and VMS but want to get into the Apple operating systems, Sun, Linux, etc.

Speaking of Apple… We haven’t really gotten started with Apple. We covered the lack of an OS story in the 90s - but there’s a lot to unpack around the founding of Apple, Steve Jobs and Woz, and the re-emergence of Apple and its impact.

And since that didn’t happen in a vacuum, there were a lot of machines in that transition from the PC being a hobbyist market to being a full-blown industry. We talked through RadioShack, Commodore, the Altair, and the Xerox Alto.

We have covered some early mainframes like the Atanasoff-Berry Computer, ENIAC, the story of the Z1 and Zuse, and even supercomputers like Cray, but still need to tell the later story, bridging the gap between the mainframe, the minicomputer, and the traditional servers we might find in a data center today.

We haven’t told the history of the Internet. We’ve touched on bits and pieces, but want to get into those first nodes that got put onto ARPAnet, the transition to NSFnet, and the merging of the nets into the Internet. And we covered sites like Friendster, Wikipedia, and even the Netscape browser, but the explosion of the Internet has so many other stories left to tell. Literally a lifetime’s worth. 

For example, we covered Twitter and Snapchat, but not yet Google or Facebook.

We covered the history of object-oriented languages. We also covered BASIC, PASCAL, FORTRAN, ALGOL, and Java. But we still want to look at AWS and the modern web services architecture that’s allowed for an explosion of apps and web apps.

Mobility. We covered the Palm Pilot and a little on device management, but still need to get into the iPhone and Samsung and the underlying technology that enabled mobility. 

And enterprise software and compliance.

Knowing the past informs each investment thesis. We covered Y Combinator, but there are a lot of other VC and private equity firms to look at.

But what I thought I knew of the past isn’t always correct. As an example, coming from the Apple space, we have a hero worship of Steve Jobs that often conflicts with, for example, what's in the Walter Isaacson book. He was a brilliant man, but complicated. And the more I read and research, the more I need to unpack many of my own assumptions across the industry.

I was here for a lot of this, yet my understanding is still not what it could be.

There are also interviews to do: people who wrote code that went onto lunar landers, people who invented technologies like spreadsheets.

I wish more people could talk about their experiences openly, but even 40 years later, some are still bound by NDAs.

I’ve learned so much and I look forward to learning so much more!

The History Of Python


Haarlem, 1956. No, this isn’t an episode about New York - we’re talking Haarlem, Netherlands. Guido van Rossum was born there that year, and went on to college in Amsterdam, where he got a degree in math and computer science. He then went to work at the Centrum Wiskunde & Informatica, or CWI. Here, he worked on BSD Unix and the ABC programming language, which had been written by Lambert Meertens, Leo Geurts, and Steven Pemberton of CWI.

He’d worked on ABC for a few years through the 1980s and started to realize some issues. It had initially been a monolithic implementation, which made it hard to implement certain new features, like being able to access file systems and functions within operating systems. But Meertens had been an editor of the ALGOL 68 Report, so ABC had a lot of the ALGOL 68 influences that are prevalent in a number of more modern languages, and it could compile for a number of operating systems. It was a great way to spend your 20s if you’re Guido.

But after some time building interpreters and operating systems, many programmers think they have some ideas for what they might do if they just… started over. Especially when they hit their 30s. And so as we turned the corner towards the increasingly big hair of the 1990s, Guido started a new hobby project over the holiday break for Christmas 1989. 

He had been thinking of a new scripting language, loosely based on ABC. One that Unix and C programmers would be interested in, but maybe not as cumbersome as C had become. So he got to work on an interpreter. One that those open source type hackers might be interested in. ALGOL had been great for math, but we needed so much more flexibility in the 90s, unlike bangs. Bangs just needed Aquanet.

He named his new creation Python because he loved Monty Python’s Flying Circus. They had a great TV show from 1969 to 1974, and a string of movies in the 70s and early 80s. They’ve been popular amongst people in IT since I got into IT.

Python is a funny language. It’s incredibly dynamic. Like bash or a shell, we can fire it up, define a variable, and echo that out on the fly. But it can also be procedural, object-oriented, or functional. And it has a standard library but is extensible, so you can add libraries to do tons of new things that wouldn’t make sense to build in (and so bloat and slow down) other apps. For example, need to get started with big array processing for machine learning projects? Install TensorFlow or NumPy. Or, depending on your machine learning needs, there are PyTorch, SciPy, Pandas, and the list goes on.
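That shell-like dynamism is easy to demonstrate. A minimal sketch - names can be bound and rebound on the fly, and functions are values you can pass around:

```python
# Python is dynamic: a name can be bound and rebound on the fly,
# much like a shell variable, with no declarations required.
x = 42
print(x)          # an int
x = "forty-two"
print(x)          # now a str; the name simply rebinds

# Functions are values too, so behavior can be passed around freely.
def shout(s):
    return s.upper() + "!"

apply_twice = lambda f, v: f(f(v))
print(apply_twice(shout, "hi"))  # HI!!
```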

In 1994, 20 developers met at NIST (formerly the National Bureau of Standards) in Maryland for the first Python workshop, and the first Python evangelists were minted. It was obvious pretty quickly that the modular nature and ease of scripting, combined with an ability to do incredibly complicated tasks, was something special. What was drawing this community in? Well, let’s start with the philosophy: the Zen of Python, as Tim Peters wrote it in 1999:

  • Beautiful is better than ugly.
  • Explicit is better than implicit.
  • Simple is better than complex.
  • Complex is better than complicated.
  • Flat is better than nested.
  • Sparse is better than dense.
  • Readability counts.
  • Special cases aren't special enough to break the rules.
  • Although practicality beats purity.
  • Errors should never pass silently.
  • Unless explicitly silenced.
  • In the face of ambiguity, refuse the temptation to guess.
  • There should be one—and preferably only one—obvious way to do it.
  • Although that way may not be obvious at first unless you're Dutch.
  • Now is better than never.
  • Although never is often better than right now.
  • If the implementation is hard to explain, it's a bad idea.
  • If the implementation is easy to explain, it may be a good idea.
  • Namespaces are one honking great idea—let's do more of those!

Those are important enough to be semi-official and can be found by entering “import this” into a Python shell. Another reason Python became important is that it’s multi-paradigm. I said it could be kinda’ functional - sure. Use one big old function for everything if you’re moving from COBOL and just don’t wanna’ rethink the world. Or be overly object-oriented when you move from Java and build 800 functions to echo hello world in 800 ways. Wanna map-reduce your Lisp code? Bring it. Or add an extension and program in paradigms I’ve never heard of. The number of libraries and other ways to extend Python out there is pretty much infinite.
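To make the multi-paradigm point concrete, here is the same trivial task - summing a list - written three ways. A toy illustration, not a style recommendation:

```python
from functools import reduce

data = [1, 2, 3, 4]

# Procedural: one explicit loop with mutation.
total = 0
for n in data:
    total += n

# Object-oriented: wrap the behavior in a class.
class Summer:
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(self.items)

# Functional: map/reduce, no mutation.
functional_total = reduce(lambda a, b: a + b, map(lambda n: n, data))

# All three paradigms arrive at the same answer.
print(total, Summer(data).total(), functional_total)
```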

And that extensibility was the opposite of ABC and why Python is special. This isn’t to take anything away from the syntax. It’s meant to be and is an easily readable language. It’s very Dutch, with not a lot of frills like that. It uses white space much as the Dutch use silence. I wish it could stare at me like I was an idiot the way the Dutch often do. But alas, it doesn’t have eyeballs. Wait, I think there’s a library for that. 

So what I meant by white space instead of punctuation is that it uses an indent instead of a curly bracket or keyword to delimit blocks of code. Increase the tabbing and you move to a new block. Many programmers do this in other languages just for readability. Python does it for code. 
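A minimal example of indentation doing the work that braces or keywords do elsewhere - dedenting is what closes the block:

```python
# Indentation is the block delimiter: no braces, no 'end' keyword.
def classify(n):
    if n < 0:
        return "negative"    # this line belongs to the if-block
    return "non-negative"    # dedenting one level closed the if-block

for i in (-1, 1):
    print(classify(i))       # indented, so inside the for-block
```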

Basic statements included, which match or are similar to most languages, include if, for, while, try, raise, except, class, def, with, break, continue, pass, assert, yield, import and print until python 3 when that became a function. It’s amazing what you can build with just a dozen and a half statements in programming. You can have more, but interpreters get slower and compilers get bigger and all that… 

Python also has all the expressions you’d expect in a modern language, especially lambdas. And methods. And duck typing, where suitability for a method is determined by the properties of an object rather than its type. This can be great. Or a total pain. Which is why Python has been moving toward gradual typing.
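A quick illustration of duck typing - the classes and method names here are made up. Anything with the right method works; no common base class or declared interface is required:

```python
# Duck typing: if it quacks, it's usable. These two classes share
# no base class, only a method with the same name.
class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep"

def make_noise(thing):
    return thing.quack()   # works for any object that has quack()

print([make_noise(x) for x in (Duck(), Robot())])
```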

The types of objects are bool, byte array, bytes, complex, dict, ellipsis (which I overuse), float, frozen set, int, list, NoneType (which I try to never use), NotImplementedType, range, set, str, and tuple so you can pop mixed tapes into a given object. Not to be confused with a thruple, but not to not be confused I guess… 
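A small sketch confirming a few of those built-in type names; the sample values are arbitrary:

```python
# Sampling some of the built-in types the episode lists, keyed by
# the name Python itself reports for each.
samples = {
    "bool": True,
    "bytes": b"\x00",
    "complex": 3 + 4j,
    "dict": {"key": "value"},
    "float": 2.5,
    "int": 7,
    "list": [1, 2],
    "NoneType": None,
    "range": range(3),
    "set": {1, 2},
    "str": "hello",
    "tuple": (1, "mixed", 3.0),   # tuples happily hold mixed types
}
for name, value in samples.items():
    assert type(value).__name__ == name
print("all type names check out")
```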

Another draw of Python was the cross-compiler concept. An early decision was to make Python able to talk to C. This won over the Unix and growing Linux crowds. And today we have cross-compilers for C and C++, Go, .Net, R, machine code, and of course, Java.

Python 2 came in 2000. We got a garbage collection system and a few other features, and seven point releases over the next ten years. Python 3 came in 2008 and represented a big change. It was partially backward-compatible, but it was the first Python release that wasn’t fully backward-compatible. We have had seven point releases in the past ten years as well. Python 3 turned print into a function, simplified some syntax, moved to storing strings as Unicode by default, reworked the range function, changed how global variables react inside for-loops, implemented a simpler set of rules for order comparisons, and much more.
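A few of those Python 3 changes in miniature (this assumes a Python 3 interpreter; under Python 2 the behavior differs):

```python
# print is a function in Python 3, not a statement.
print("print is now a function")

# str is Unicode by default: the accented character is one character.
s = "héllo"
assert isinstance(s, str) and len(s) == 5

# range returns a lazy range object rather than a materialized list.
r = range(3)
assert not isinstance(r, list)
assert list(r) == [0, 1, 2]
```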

At this point developers were experimenting with deploying microservices. Microservices is a software development architecture where we build small services, perhaps just a script or a few scripts daisy-chained together, that do small tasks. These are then more highly maintainable, more easily testable, often more scalable, can be edited and deployed independently, can be structured around capabilities, and each of the services can be owned by the team that created it, with a contract to ensure we don’t screw over other teams as we edit them.

Amazon introduced AWS Lambda in 2014 and it became clear quickly that the new microservices paradigm was accelerating the move of many SaaS-based tools to a microservices architecture. Now, teams could build in Node or Python or Java or Ruby or C# or, heaven forbid, Go. They could quickly stand up a small service and let other teams consume the back-end service in a way that is scalable and doesn’t require standing up a server, or even a virtual server, which is how we did things in EC2. The containerization concept is nothing new. We had chroot in 1979 with Unix v7, and Solaris brought us containerization in 2004. But those were more about security. Docker had shown up in 2013, and the idea of spinning up a container to run a script and give it its own libraries in an isolated container - that was special. And Amazon made it more so.
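At its core, a Lambda-style microservice is just a handler function that receives an event and returns a response. This is a hedged sketch of that shape in Python; the event fields and the greeting logic are illustrative assumptions, not AWS's documented API:

```python
import json

# A minimal Lambda-style handler: one small function, one small task.
# AWS would call this with an event dict and a context object; here
# the event shape ({"name": ...}) is made up for illustration.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoking it locally, the way a unit test for such a service might.
response = handler({"name": "python"})
print(response["body"])
```

Part of the appeal is exactly this: the whole service is testable as a plain function call, with no server to stand up.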

Again, libraries and modularization. And the modular nature is key for me. Let’s say you need to do image processing. Pillow makes it easier to work with images of almost any image type you can think of. For example, it can display an image, convert it into different types, automatically generate thumbnails, run smooth, blur, and contour filters, and even increase the detail. Libraries like that take a lot of the friction out of learning to display and manage images.

But Python can also create its own imagery. For example, Matplotlib generates two dimensional graphs and plots points on them. These can look as good as you want them to look and actually allows us to integrate with a ton of other systems. 

Van Rossum’s career wasn’t all Python though. He would go on to work at NIST, then CNRI and Zope, before ending up at Google in 2005, where he created Mondrian, a code review system. He would go to Dropbox in 2013 and retire from professional life in 2019. He stepped down as the “Benevolent Dictator for Life” of the Python project in 2018 and sat on the Python Steering Council for a term but is no longer involved. It’s been one of the most intriguing transfers of power I’ve seen, but Python is in great hands to thrive in the future. This was around the point when Python 2 was officially discontinued, with Python 3 thriving.

And thriving it is: as of mid-2020, there are over 200,000 packages in the Python Package Index - things from web frameworks and web scraping to automation, graphical user interfaces, documentation, databases, analytics, networking, systems administration, science, mobile, and image management and processing. If you can think of it, there’s probably a package to help you do it. And it’s one of the easier languages to learn.

Here’s the thing. Python grew because of how flexible and easy it is to use. It didn’t have the same amount of baggage as other languages. And that flexibility and modular nature made it great for workloads in a changing and more microservice-oriented world. Or, did it help make the world more microservice-oriented? It was a Christmas hobby project that has now ballooned into one of the most popular languages to write software in the world. You know what I did over my last holiday break? Sleep. I clearly should have watched more Monty Python, so the short skits could embolden me to write a language perfect for making the programmer’s equivalent: smaller, more modular scripts and functions. So as we turn the corner into all the holidays in front of us, consider this while stuck at home: what hobby project can we propel forward and hopefully end up with the same type of impact Guido had? A true revolutionary in his own right.

So thank you to everyone involved in Python and everyone that’s contributed to those 200k+ projects. And thank you, listeners, for continuing to tune in to the History of Computing Podcast. We are so lucky to have you.

The Great Firewall of China


“If you open the window, both fresh air and flies will be blown in.” Deng Xiaoping perfectly summed up the Chinese perspective on the Internet during his tenure as the paramount leader of the People’s Republic of China, a position he held from 1978 to 1989. Yes, he opened up China with a number of market-economy reforms and so is hailed as the “Architect of Modern China.” However, he did so with his own spin.

The Internet had been on the rise globally and came to China in 1994. The US had been passing laws since the 1970s to both aid and limit the uses of this new technology, but China was slow to the adoption up until this point. 

In 1997, the Ministry of Public Security prohibited the use of the Internet to “disclose state secrets or injure the interests of the state or society.” The US had been going through similar attempts to limit the Internet with the Communications Decency Act in 1996, which the US Supreme Court ended up striking down in 1997. And this was a turning point for the Internet in the US and in China. Many a country saw what was about to happen, and governments were grappling with how to handle the cultural impact of technology that allowed for unfettered, globally interconnected humans.

By 1998, the Communist Party stepped in to start a project to build what we now call the Great Firewall of China. They took their time, and over eight years built a technology they could fully control. Fang Binxing graduated with a PhD from the Harbin Institute of Technology and moved to the National Computer Network Emergency Response Technical Team, where he became the director in 2000. It’s in this capacity that he took over creating the Great Firewall.

They watched what people were putting on the Internet, and by 2002 were able to make 300 arrests. They were just getting started, and brought tens of thousands of police in to get their first taste of internet and video monitoring - and of this crazy facial recognition technology.

By 2003 China was able to launch the Golden Shield Project. Here, they straight-up censored a number of web sites, looking for pro-democracy terms, news sources that spoke out in favor of the Tiananmen Square protests, anyone that covered police brutality, and locked down the freedom of speech. They were able to block blogs and religious organizations, lock down pornography, and block anything the government could consider subversive, like information about the Dalai Lama. 

And US companies played along. Because money. Organizations like Google and Cisco set up systems in the country and made money off China. But they also gave ways around it, like providing proxy servers and VPN software. We typically lump Golden Shield and the Great Firewall of China together, but Golden Shield was built by Shen Changxiang, and the Great Firewall mainly runs on the three big internet pipes coming into the country, basically tapping the gateway in and out, whereas Golden Shield is more distributed, affiliated with public security, and so used to monitor domestic connections.

As anyone who has worked on proxies and various filters knows, blocking traffic is a constantly moving target. The Chinese government blocks IP addresses and ranges - but new addresses are always coming online. They run lying DNS resolvers and hijack DNS, sometimes providing the wrong IP to honeypot certain sites; people counter that with local hosts files or DNS over TLS. They use transparent proxies to block, or filter, specific URLs and URI schemes. That can be keyword-based, and bypassed by encrypting server names.
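The local-hosts-file workaround amounts to preferring a pinned local mapping over whatever a (possibly lying) resolver returns. A toy sketch of that logic, with made-up hostnames and addresses:

```python
# Pinned mappings, as a hosts file would provide. The hostname and
# addresses here are invented for illustration.
pinned_hosts = {"example.org": "93.184.216.34"}

def resolve(hostname, dns_answer):
    # Trust the local pin first; fall back to the resolver's answer
    # only for names we haven't pinned.
    return pinned_hosts.get(hostname, dns_answer)

# Even if a hijacked resolver hands back a honeypot address,
# the pinned entry wins.
print(resolve("example.org", "10.0.0.66"))
print(resolve("other.net", "10.0.0.66"))
```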

They also use more advanced filtering options, like packet forging, where they can do a TCP reset attack - which can be thwarted by ignoring the resets. And of course man-in-the-middle attacks, because, you know, state-owned TLS: they can just replace the GitHub, Google, or iCloud certs - which has happened with each. They employ quality-of-service filtering: deep packet inspection that mirrors traffic, analyzes it, and then creates packet loss to slow traffic to unwanted sites. This helps thwart VPNs, SSH tunneling, and Tor, but can be bypassed by spoofing good traffic or using pluggable transports. Regrettably, that can be as processor-intensive as the act of blocking. Garlic routing is used when onion routing can’t be.

All of this is aided by machine learning. Because, like we said, it’s a constantly moving target. And ultimately, pornography and obscene content is blocked. Discussion about protests is stomped out. Any dissent about whether Hong Kong or Taiwan are part of China is disappeared. Democracy is squashed.

By 2006, Chinese authorities could track access both centrally and from local security bureaus. The government could block and watch what the people were doing. Very 1984. By 2008, Internet cafés were logging which customers used which machines. Local officials could crack down further than the central government or toe the party line.

In 2010, Google decided it wasn't playing along any more and stopped censoring its own results. In 2016, the WTO defined the Great Firewall as a trade barrier. Wikipedia has repeatedly been blocked and unblocked since the Chinese version launched in 2001, but as of 2019 all Wikipedia versions are completely blocked in China.

The effect of many of these laws and engineering projects has been to exert social control over the people of China. But it also acts as a form of protectionism. Giving the people Baidu and not Google means a company like Baidu has a locked-in market, making Baidu worth over $42 billion. Sure, Alphabet, the parent of Google, is worth almost a trillion dollars, but in their minds, at least China is protecting some market for Baidu. And giving the people Alibaba instead of Amazon gives people the ability to buy goods while China protects a half-trillion-dollar market-capitalized company, in money that might otherwise be capitalizing Amazon, which currently stands at $1.3 trillion.

Countries like Cuba and Zimbabwe then leverage technology from China to run their own systems. With such a large number of people only able to access the parts of the Internet that their government feels are ok, many have referred to the Internet as the Splinternet. China has between 700 and 900 million internet users, with over half using broadband and over 500 million using a smartphone. But the government owns the routes they use, in the form of CSTNET, ChinaNet, CERNET, and CHINAGBN - expanded to 10 access points in the last few years to handle the increased traffic.

Sites like Tencent provide access to millions of users. With that much traffic, they’re now starting to export some technologies, like TikTok, launched in 2016. And whenever a new app or site comes along based in China, it often comes with plenty of suspicion. And sometimes that comes with a new version of TikTok that removes potentially harmful activity.

And sometimes Baidu Maps and Tianditu are like Google Maps, but Chinese - like the skit in the show Silicon Valley. Like Alipay for Stripe. Or Soso Baike for Wikipedia. And there are plenty of viral events in China that many Americans miss, like the Black Dorm Boys or Sister Feng. Or “very erotic, very violent,” or the Baidu 10 Mythical Creatures, and the list goes on. And there’s Chinese internet slang, like 520 meaning “I love you” or 995 meaning “help.” More examples of splinternetting, or just cultural differences? You decide.

And the protectionism goes a lot of different ways. N Jumps is Chinese slang for the number of people who jump out of windows at Foxconn factories. We benefit from those not-great working conditions. The forced introduction of services and the theft of intellectual property are where the price for that benefit gets paid in full. And I’ve seen it estimated that roughly a third of sites are blocked by the firewall - a massive percentage, and one that means some of the top sites don't benefit from Chinese traffic.

But suffice it to say that the Internet is a large and sprawling place. And I never want to be an apologist. Some of this is just cultural differences. And who am I to impose my own values on other countries when at least they have the Interwebs - unlike North Korea. Oh, who am I kidding… Censorship is bad. And the groups that have risen to give people the Internet, defend the right to access it, and help people bypass controls put in place by oppressive governments - those people deserve our thanks.

So thank you to everyone involved. Except the oppressors. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. Now go install Tor, if only to help those who need to access modern memes do so. Your work is awesome sauce.

Have a great day.

The Great Web Blackout of 1996


The killing of George Floyd at the hands of police in Minneapolis gave the Black Lives Matter movement a new level of prominence and protesting racial injustice jumped into the global spotlight with protests spreading first to Louisville and then to practically every major city in the world. 

Protesting is nothing new, but the impacts can be seen far and wide. From the civil rights protests to the Vietnam War protests of the 60s, they are a way for citizens to use their free speech to enact social change. After all, Amendment I states that "Congress shall make no law ... abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble."

The 90s was a weird time. In many ways secularization was gaining momentum in the US, and many of the things people feared have turned out to become reality. Many have turned their backs on religion in favor of technology. Neil Gaiman brought this concept to HBO by turning technology into a god. And whether they knew that was what they were worried about or not, the 90s saw a number of movements meant to send the thought police intruding into everyday life. Battle lines were drawn by people like Tipper Gore, who wanted to slap labels on music, and a long and steady backlash to those failures led to many of the culture battles we are fighting today. These days we say “All Lives Matter,” but we often really mean that life was simpler when we went to church.

And many go to church still. But not like we used to. Consider this. 70% of Americans went to church in 1976. Now it’s less than half. And less than a third have been to church in the past week. That shouldn’t take anything away from the impact religion has in the lives of many. But a societal shift has been occurring for sure. And the impact of a global, online, interconnected society is often under-represented.

Imagine this. A way of talking to other humans in practically every country in the world was emerging. Before, we paid for hefty long-distance calls or sent written communication that could take days or weeks to be delivered. And along came this weird new medium that allowed us to talk to almost anyone, almost instantly. And for free. We could put images, sounds, and written words out there almost anonymously, and access the same. And people did.

The rise of Internet porn wasn’t a thing yet. But we could come home from church and go online and find almost anything. And by anything, it could be porn. Today, we just assume we can find any old kind of porn anywhere but that wasn’t always the case. In fact, we don’t even consider sex education materials or some forms of nudity porn any more. We’ve become desensitized to it. And that represented a pretty substantial change. And all societal changes, whether good or bad, deserve a good old fashioned backlash. Which is what the Communications Decency Act, Title V of the Telecommunications Act of 1996, was.

But the Electronic Frontier Foundation (or EFF) had been anticipating the backlash. The legislation could fine or even incarcerate people for distributing offensive or indecent content. Battle lines were forming between those who wanted to turn librarians into the arbiters of free speech and those who thought all content should be open. 

Then as now, the politicians did not understand the technology. They can’t. It’s not what got them elected. I’ve never judged that. But they understood that the boundaries of free speech were again being tested and, as they have done for hundreds of years, they wanted to try and limit the pushing of those boundaries. Because sometimes progress is uncomfortable.

Enter the Blue Ribbon Online Free Speech Campaign, which the EFF and the Center for Democracy and Technology were organizing. The Blue Ribbon campaign encouraged site owners to post images of ribbons on their sites in support. Now, at this point, no one argued these were paid actors. They branded themselves as Netizens and planned to protest. A new breed of protest, online and in person. And protest they did. They did not want their Internet, or the Internet we inherited 25 years later, to be censored.

Works of art are free. Access to medical information that some might consider scandalous is free. And yes, porn is often free. We called people who ran websites webmasters back then. They were masters of zeros and ones in HTML. The webmasters thought people making laws didn’t understand what they were trying to regulate. They didn’t. But lawmakers get savvier every year. Just as the Internet becomes harder to understand. 

People like Shabir Safdar were unsung heroes. Patrick Leahy, the Democratic senator from Vermont, spoke out. As did Yahoo and Netscape. Lawmakers wanted to regulate the Internet like they had regulated television. But we weren’t having it. And then, surprisingly, Bill Clinton signed the CDA into law. The pioneers of the Internet jumped into action. From San Francisco to the CDT in Brussels, they planned to set website backgrounds black. I remember it happening but was too young to understand what it meant at the time. I just thought the black pages looked cool.

It was February 8, 1996. And backgrounds were changed for 48 hours. 

The protests were covered by CNN, Time Magazine, the New York Times, and Wired. It got enough attention that the ACLU jumped into the fight. And ultimately the Act’s indecency provisions were declared unconstitutional by the US Supreme Court in 1997 in Reno v. ACLU. Justice John Paul Stevens wrote the opinion of the Court, with Sandra Day O’Connor, joined by Chief Justice William Rehnquist, concurring in the judgment in part. The Internet we have today, for better or worse, was free. As free for posting videos of police killing young black men as it is to post nudes, erotic fiction, or ads to buy Viagra. Could it be done again some day? Yes. Will it? Probably. Every few years legislators try to implement another form of the act. SOPA, COPA, and the list goes on. But again and again, we find these laws struck down or shelved. The thought police had been thwarted.

As recently as 2012, Reddit and other sites protested SOPA and PIPA by repeating the blackout. The protests brought enough attention that Congress shelved the bills. Because free speech. And there’s hate speech sprinkled in there as well. Because the Internet helps surface the best and worst of humanity. But you know what, we’re better off for having all of it out there in the open, as hurtful and wonderful and beautiful and ugly as it all can be, according to our perspectives. And that’s the way it should be. Because the knowledge of all of it helps us to grow and be better and address that which needs to be addressed.

And society will always grapple with adapting to technological change. That’s been human nature since Prometheus stole fire and gave it to humanity. Just as we’ve been trying to protect intellectual property and combat piracy and everything else that can butt up against accelerating progress. It’s hard to know where the lines should be drawn. And globalism in the form of globally connected computers doesn’t make any of that any easier.

So thank you to the heroes who forced this issue to prominence and got the backing to fight it back in the 90s. If it had been over-regulated we might not have the Internet as it is today. Just as it should be. Thank you for helping to protect free speech. Thank you for practicing your free speech. And last but not least, thank you for tuning in to this episode of the History of Computing Podcast. Now go protest something!

America Online (AOL)


Today we’re going to cover America Online, or AOL. 

The first exposure many people had to “going online” was to hear a modem connect.

And the first exposure many had to electronic mail was the sound “you’ve got mail.”

But how did AOL rise so meteorically to help mainstream first going online in walled gardens and then connecting to the Internet?

It’s 1983. Steve Case joins a company called Control Video Corporation to bring online services to the now-iconic Atari 2600. CVC was offering a service called GameLine that allowed subscribers to rent games over a dialup connection. Case had grown up in Honolulu and then gone to Williams College in Massachusetts, a state that until the rise of Silicon Valley had been a breeding ground for tech companies. Up to this point, the personal computer market had mostly been for hobbyists, but it was slowly starting to go mainstream.

Case saw the power of pushing bits over modems. He saw the rise of ARPAnet and the merger of the nets that would create the Internet. The Internet had begun life as ARPAnet, a US Defense Department project, until 1981, when the National Science Foundation stepped in to start the process of networking non-defense-oriented computers. And by the time Case’s employer Control Video Corporation was trying to rent games for a dollar, something much larger than the video game market was starting to happen. 

From 1985 to 1993, the Internet, then mostly NSFNET, surged from 2,000 users to 2,000,000 users. In that time, Tim Berners-Lee created the World Wide Web in 1991 at CERN, and Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. In 1993 NSFNET began the process of decommissioning its backbone, opening the Internet to the commercial world.

And the AOL story in that time frame was similar to that of many other online services. We think of these today as Internet Service Providers, companies individuals pay to get onto the Internet, but back then they were connecting people to private networks. When AOL began life in 1985, they were called Quantum Computer Services. Case began as VP of Marketing but would transition to CEO in 1991.

But Case had been charged with strategy early on, and they focused on networking Commodore computers with a service they called Q-Link, or Quantum Link. Up until that point, most software that connected computers together had been terminal emulators. But the dialup service they built used the processing power of the Commodore itself to render the services they offered, making it much more scalable. They kept thinking of things to add to the service, starting with online chat using a service called Habitat in 1986. And by 1988 they were adding serialized fiction with a series they called QuantumLink Serial.

By 1988 they were able to add AppleLink for Apple users and PC Link for people with IBM computers and IBM clones. By 1989 they were growing far faster than Apple, the deal with Apple soured, and they changed their name to America Online. They had always included games with their product, but included a host of other services like news, chat, and mail. CompuServe changed everything when they focused on connecting people to the Internet in 1989, a model that AOL would eventually embrace.

But they were all about community from the beginning. They connected groups, provided chat communities for specific interests, and always with the games. That focus on community was paying off. Neverwinter Nights, an AOL title based on Advanced Dungeons and Dragons and arguably the first graphical Massively Multiplayer Online Role Playing Game, got huge. Sure, there had been online communities and multiplayer games before. So most of the community initiatives weren’t new or innovative, just done better than others had done them before.

They launched AOL for DOS in 1991 and AOL for Windows in 1992. At this point, you paid by the hour to access the network. People would dial in, access content, write back offline, then dial back in to send stuff. A lot of their revenue came from overages. But they were growing at a nice and steady pace. In 1993 they gave access to Usenet to users. 

In the early 90s, half of the CDs being pressed were for installing AOL on computers. By 1994 they hit a million subscribers. That’s when they killed off PC Link and Q-Link to focus on the AOL service and just kept growing. But there were challengers, and at the time, larger competitors in the market. CompuServe had been early to market connecting people to the Internet but IBM and Sears had teamed up to bring Prodigy to market. The three providers were known as the big three when modems ran at 9,600 bits per second.

But as the mid-90s came around they bought WebCrawler in 1995 and sold it to Excite shortly thereafter, inking a deal with Excite to provide search services. They were up to 3 million users. In 1996, with downward pressure on pricing, they went to a flat $19.95 pricing model. This led to a spike in usage that they weren’t prepared for and a lot of busy signals, which caused a lot of users to cancel after just a short time using the service. And yet, they continued to grow. They inked a deal with Microsoft for AOL to be bundled with Windows and the growth accelerated. 

1997 was a big year. Case engineered a three-way deal where WorldCom bought CompuServe for $1.2 billion in stock and then sold its subscriber business to AOL. This made way for a whole slew of competitors to grow, which is an often-unanticipated result of big acquisitions. This was also the year they released AIM, which gave us our first taste of a network-effect messaging service. Even after leaving AOL, many a subscriber hung on to AIM for a decade. That’s now been replaced by WhatsApp, Facebook Messenger, text messaging, Snapchat to some degree, and messaging features inside practically every tool, from Instagram and Twitter to more community-based solutions like Slack and Microsoft Teams. AIM caused people to stay online longer. Which was great in an hourly model but problematic in a flat pricing model. Yet it was explosive until Microsoft and others stepped in to compete with the free service. It lasted until it was shut down in 2017. By then, I was surprised it was still running, to be honest.

In 1998 AOL spent $4.2 billion to buy Netscape. And Netscape would never be the same. Everyone thought the Internet would become a huge mall at that point. But instead, that would have to wait for Amazon to emerge as the behemoth they now are.

In 1999, AOL launched AOL Search and hit 10 million users. AOL invested $800 million in Gateway, and the CompuServe deal put another 2.2 million subscribers on the board. They also bought MapQuest for $1.1 billion. And here’s the thing: owning the browser, community content, a shopping experience, original content, maps, and everything else was really starting to become a playbook that others would follow in the dark ages after the collapse of AOL. And yes, that would be coming. All empires over-extend themselves eventually.

In 2000 they made over $4 billion in subscriptions. 15 years of hard work was paying off. With over 23 million subscribers, their market valuation was $224 billion in today’s money, and check this out, only half of the US was online. But they could sense the tides changing. We could all feel the broadband revolution in the air. Maybe to diversify, or maybe to grow into areas they hadn’t, AOL merged with media conglomerate Time Warner in 2001, paying $165 billion for them in what was then the biggest merger (or maybe reverse merger) in history.

This was a defining moment for the history of the Internet. AOL was clearly leveraging their entry point into the internet as a means of pivoting to the online advertising market and Warner Cable brought them into broadband. But this is where the company became overextended. Yes, old media and new media were meeting but it was obvious almost immediately that this was a culture clash and the company never really met the growth targets. Not only because they were overextended but also because so much money was being pumped into Internet startups that there were barbarians at every gate. And of course, the dot com bubble burst. Oh, and while only 1% of homes had broadband, that market was clearly about to pop and leave companies like AOL in the dust. But, now Time Warner and Time Warner Cable would soften that blow as it came. 

2002, over 26 million users. And that’s when the decline began. By then 12% of homes in the US were wired up to broadband, likely DSL, or Digital Subscriber Lines, at that time. 

Case left AOL in 2003 and the letters AOL would get dropped from the name; the company was now just Time Warner again. 2004 brought a half-billion-dollar settlement with the SEC for securities fraud. Oops. More important than the cash crunch, it was a horrible PR problem at a time when subscribers were falling off and broadband had encroached, with over a quarter of US homes embracing faster Internet than anything dialup could offer.

The advertising retooling continued as the number of subscribers fell. In 2007 AOL moved to New York to be closer to those Mad Men. By the way, the show Mad Men happened to start that year. This also came with layoffs. And by then, broadband had blanketed half of the US. And now, wireless Internet was being developed, although it would not start to encroach until about 2013. 

AOL and Time Warner got a divorce in 2009 when AOL was spun back off into its own standalone company and Tim Armstrong was brought in from Google to run the place. They bought his old company, Patch, that year to invest in more hyperlocal news. You know those little papers we all get for our neighborhoods? They often don’t seem like much more than a zine from the 90s. Hyperlocal is information for a smaller community, focused on the concerns of and what matters to that cohort.

In 2010 they bought TechCrunch; in 2011 they bought The Huffington Post. To raise cash they sold off a billion dollars in patents to Microsoft in 2012. Verizon bought AOL in 2015 for $4.4 billion. They would merge it with Yahoo! in 2017 as a company called Oath, which is now called Verizon Media. And thus, AOL ceased to exist. Today some of those acquisitions are part of Verizon Media and others, like Tumblr, were ruined by mismanagement and corporate infighting.

Many of the early ideas paved the way for future companies. AOL Local can be seen in companies like Yelp. AOL Video is similar to what became YouTube or TikTok, or streaming media like Netflix and Hulu. AOL Instant Messenger lives on in WhatsApp. XDrive in Google Drive. AOL News in CNN, Apple News, Fox News, etc. We now live in an app-driven world where each of these can be a new app coming around every year or two and then fading into the background as the services are acquired by an Amazon, Google, Apple, or Facebook and fade off into the sunset, only to have others see the billions of dollars paid as a reason to put their own spin on the concept.

Steve Case runs an investment firm now. He clearly had a vision for the future of the Internet and did well off that. And his book The Third Wave lays out the concept that rather than try and build all the stuff a company like AOL did, that companies would partner with one another. While that sounds like a great strategy, we do keep seeing acquisitions over partnerships. Because otherwise it’s hard to communicate priorities through all the management layers of a larger company. He talked about perseverance, like how Uber and Airbnb would punch through the policies of regulators. I suspect what we are seeing by being sent home due to COVID will propel a lot of technology 5-10 years in adoption and force that issue. 

But I think the most interesting aspect of that book to me was when he talked about R&D spending in the US. He made a lot of money at AOL by riding the first wave of the Internet. And that began far before him, when the ARPANET was formed in 1969. Federal R&D spending has dropped to its lowest point since the 1950s, due to a lot of factors, not least of which is the end of the Cold War. And we’re starting to see the pipeline of ideas and innovations that came out of that era dry up just as the industries built on them become heavily regulated.

So think about this. AOL made a lot of money by making it really, really easy to get online and then onto the Internet. They truly helped to change the world by taking R&D that the government instigated in the 70s and giving everyday people, not just computer scientists, access to it. They built communities around it and later diversified when the tides were changing. What R&D from 5 to 20 years ago that could truly benefit humanity today hasn’t made it into homes across the world - and what of it can we help to proliferate?

Thank you for joining us for this episode of the History of Computing Podcast. We are so lucky to have you and we are so lucky to make use of the innovations you might be bringing us in the future. Whether those are net-new technologies, or just making that research available to all. Have a great day. 

Bill Gates Essay: Content Is King


Today we’re going to cover an essay Bill Gates wrote in January 1996, some months after his infamous Internet Tidal Wave memo, called Content is King - a phrase that has since become ubiquitous. It’s a bit long but perfectly explains the Internet business model until such time as there was so much content that the business model had to change.

See, once anyone could produce content and host it for free, like in the era of Blogger, the model flipped. So here goes: 

“Content is where I expect much of the real money will be made on the Internet, just as it was in broadcasting.

The television revolution that began half a century ago spawned a number of industries, including the manufacturing of TV sets, but the long-term winners were those who used the medium to deliver information and entertainment.

When it comes to an interactive network such as the Internet, the definition of “content” becomes very wide. For example, computer software is a form of content - an extremely important one, and the one that for Microsoft will remain by far the most important.

But the broad opportunities for most companies involve supplying information or entertainment. No company is too small to participate.

One of the exciting things about the Internet is that anyone with a PC and a modem can publish whatever content they can create. In a sense, the Internet is the multimedia equivalent of the photocopier. It allows material to be duplicated at low cost, no matter the size of the audience.

The Internet also allows information to be distributed worldwide at basically zero marginal cost to the publisher. Opportunities are remarkable, and many companies are laying plans to create content for the Internet.

For example, the television network NBC and Microsoft recently agreed to enter the interactive news business together. Our companies will jointly own a cable news network, MSNBC, and an interactive news service on the Internet. NBC will maintain editorial control over the joint venture.

I expect societies will see intense competition - and ample failure as well as success - in all categories of popular content - not just software and news, but also games, entertainment, sports programming, directories, classified advertising, and on-line communities devoted to major interests.

Printed magazines have readerships that share common interests. It’s easy to imagine these communities being served by electronic online editions.

But to be successful online, a magazine can’t just take what it has in print and move it to the electronic realm. There isn’t enough depth or interactivity in print content to overcome the drawbacks of the online medium.

If people are to be expected to put up with turning on a computer to read a screen, they must be rewarded with deep and extremely up-to-date information that they can explore at will. They need to have audio, and possibly video. They need an opportunity for personal involvement that goes far beyond that offered through the letters-to-the-editor pages of print magazines.

A question on many minds is how often the same company that serves an interest group in print will succeed in serving it online. Even the very future of certain printed magazines is called into question by the Internet.

For example, the Internet is already revolutionizing the exchange of specialized scientific information. Printed scientific journals tend to have small circulations, making them high-priced. University libraries are a big part of the market. It’s been an awkward, slow, expensive way to distribute information to a specialized audience, but there hasn’t been an alternative.

Now some researchers are beginning to use the Internet to publish scientific findings. The practice challenges the future of some venerable printed journals.

Over time, the breadth of information on the Internet will be enormous, which will make it compelling. Although the gold rush atmosphere today is primarily confined to the United States, I expect it to sweep the world as communications costs come down and a critical mass of localized content becomes available in different countries.

For the Internet to thrive, content providers must be paid for their work. The long-term prospects are good, but I expect a lot of disappointment in the short-term as content companies struggle to make money through advertising or subscriptions. It isn’t working yet, and it may not for some time.

So far, at least, most of the money and effort put into interactive publishing is little more than a labor of love, or an effort to help promote products sold in the non-electronic world. Often these efforts are based on the belief that over time someone will figure out how to get revenue.

In the long run, advertising is promising. An advantage of interactive advertising is that an initial message needs only to attract attention rather than convey much information. A user can click on the ad to get additional information - and an advertiser can measure whether people are doing so.

But today the amount of subscription revenue or advertising revenue realized on the Internet is near zero - maybe $20 million or $30 million in total. Advertisers are always a little reluctant about a new medium, and the Internet is certainly new and different.

Some reluctance on the part of advertisers may be justified, because many Internet users are less-than-thrilled about seeing advertising. One reason is that many advertisers use big images that take a long time to download across a telephone dial-up connection. A magazine ad takes up space too, but a reader can flip a printed page rapidly.

As connections to the Internet get faster, the annoyance of waiting for an advertisement to load will diminish and then disappear. But that’s a few years off.

Some content companies are experimenting with subscriptions, often with the lure of some free content. It’s tricky, though, because as soon as an electronic community charges a subscription, the number of people who visit the site drops dramatically, reducing the value proposition to advertisers.

A major reason paying for content doesn’t work very well yet is that it’s not practical to charge small amounts. The cost and hassle of electronic transactions makes it impractical to charge less than a fairly high subscription rate.

But within a year the mechanisms will be in place that allow content providers to charge just a cent or a few cents for information. If you decide to visit a page that costs a nickel, you won’t be writing a check or getting a bill in the mail for a nickel. You’ll just click on what you want, knowing you’ll be charged a nickel on an aggregated basis.

This technology will liberate publishers to charge small amounts of money, in the hope of attracting wide audiences.

Those who succeed will propel the Internet forward as a marketplace of ideas, experiences, and products - a marketplace of content.”

ALGOL


Today we’re going to cover a computer programming language many might not have heard of, ALGOL. 

ALGOL was created in 1958. Unlike many of the other languages of the era, it was built by committee. The Association for Computing Machinery and the German Society of Applied Mathematics and Mechanics had been floating around ideas for a universal computer programming language.

Members from the ACM were a who’s who of people influential in the transition from custom computers that were the size of small homes to mainframes. John Backus of IBM had written a programming language called Speedcoding and then Fortran. Joseph Wegstein had been involved in the development of COBOL. Alan Perlis had been involved in Whirlwind and was with the Carnegie Institute of Technology. Charles Katz had worked with Grace Hopper on UNIVAC and FLOW-MATIC. 

The Germans were equally as influential. Friedrich Bauer had brought us the stack method while at the Technical University of Munich. Hermann Bottenbruch from the Institute for Applied Mathematics had written a paper on constructing languages. Klaus Samelson had worked on a computer called PERM that was similar to the MIT Whirlwind project. He’d come into computing while studying eigenvalues.

Heinz Rutishauser had written a number of papers on programming techniques and had codeveloped the language Superplan while at the Swiss Federal Institute of Technology in Zurich. This is where the meeting would be hosted.

They met from May 27th to June 2nd in 1958, initially calling the language they would develop IAL, or the International Algebraic Language, but would expand the name to ALGOL, short for Algorithmic Language. They brought us code blocks, the concept that a pair of words or symbols begins and ends a stanza of code, like begin and end. They introduced nested, scoped functions. They wrote the whole language right there. You would declare a variable by simply saying integer and assign to it with a := 1. You would set up a for statement and define the steps to perform until a condition was met - the root of what we would now call a for loop. You could read a variable in from a punch card. It had built-in sin and cos. It was line based and fairly simple procedural programming by today’s standards. They defined how to handle special characters, and built boolean operators and floating point notation. It even had portable types.
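To get a feel for the notation, here is a small illustrative sketch in roughly ALGOL 60 style (exact syntax varied between the 1958 and 1960 reports and between implementations, so treat this as a sketch rather than a verbatim historical program) that sums the integers 1 through 10 using the block structure and for loop described above:

```algol
begin
  integer i; integer s;
  s := 0;
  comment add the integers 1 through 10 into s;
  for i := 1 step 1 until 10 do
    s := s + i
end
```

Note the begin/end pair delimiting the block and the := assignment operator, both of which ALGOL handed down to descendants like Pascal.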

And by the end they had a compiler that would run on the Z22 computer from Konrad Zuse. While it represented some of Backus’ best work, it effectively competed with FORTRAN and never really gained traction at IBM. But it influenced almost everything that happened afterwards.

Languages were popping up all over the place and in order to bring in more programmers, they wanted a formalized way to allow languages to flourish, but with a standardized notation system so algorithms could be published and shared and developers could follow along with the logic. One outcome of the ALGOL project was the Backus–Naur form, which was the first such standardization. That notation would be expanded by the Danish computer scientist Peter Naur for ALGOL 60, thus the name.
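As an illustration, the ALGOL 60 report used Backus–Naur form productions along these lines to define the language’s basic tokens (reproduced here from memory, so consider it a sketch of the style rather than a quotation):

```bnf
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<unsigned integer> ::= <digit> | <unsigned integer> <digit>
```

Each rule names a grammatical category on the left and lists the alternative forms it can take on the right, with recursion - an unsigned integer is a digit, or a shorter unsigned integer followed by a digit - standing in for repetition. That simple machinery was enough to define an entire language precisely, which is why the notation is still used in language specifications today.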

For ALGOL 60 they would meet in Paris, also adding John McCarthy, Julien Green, Bernard Vauquois, Adriaan van Wijngaarden, and Michael Woodger. The language got refined, yet a bit more complicated. FORTRAN and COBOL use continued to rage on, but academics loved ALGOL.

And the original implementation, now referred to as the ZMMD implementation, gave way to X1 ALGOL, Case ALGOL, ZAM in Poland, GOGOL, VALGOL, RegneCentralen ALGOL, Whetstone ALGOL for physics, Chinese ALGOL, ALGAMS, NU ALGOL out of Norway, ALGEK out of Russia, Dartmouth ALGOL, DG/L, USS 90 Algol, Elliott ALGOL, the ALGOL Translator, Kidsgrove Algol, JOVIAL, Burroughs ALGOL, Niklaus Wirth’s ALGOL W, which led to Pascal, MALGOL, and finally S-algol in 1979.

But it got overly complicated and overly formal. Individual developers wanted more flexibility here and there. Some wanted simpler languages. Some needed more complicated languages. ALGOL didn’t disappear as much as it evolved into other languages. Those were coming out fast and with a committee to approve changes to ALGOL, they were much slower to iterate. 

You see, ALGOL profoundly shaped how we think of programming languages. That formalization was critical to paving the way for generations of developers who brought us future languages. ALGOL would end up being the parent of CPL and through CPL, BCPL, C, C++, and through that Objective-C. From ALGOL also sprang Simula and through Simula, Smalltalk. And Pascal and from there, Modula and Delphi. It was only used for a few years but it spawned so much of what developers use to build software today. 

In fact, other languages evolved as anti-ALGOL derivatives, looking at how ALGOL did something and deciding to do it totally differently.

And so we owe this crew our thanks. They helped to legitimize a new discipline and a new career: computer programmer. They inspired. They coded. And in so doing, they helped bring us into the world of structured programming and set foundations that allowed the next generation of great thinkers to go even further, directly influencing people like Adele Goldberg and Alan Kay.

And it’s okay that the name of this massive contribution is mostly lost to the annals of history. Because ultimately, the impact is not. So think about this - what can we do to help shape the world we live in? Whether it be through raw creation, iteration, standardization, or formalization - we all have a role to play in this world. I look forward to hearing more about yours as it evolves!

The Homebrew Computer Club


Today we’re going to cover the Homebrew Computer Club.

Gordon French and Fred Moore started the Homebrew Computer Club. French hosted the club’s first meeting in his garage in Menlo Park, California on March 5th, 1975. I can’t help but wonder if they knew they were about to become the fuse that lit a powder keg. If they knew they would play a critical role in inspiring generations to go out and buy personal computers and automate everything. If they knew they would inspire the next generation of Silicon Valley hackers. Heck, it’s hard to imagine they didn’t, with everything going on at the time. Hunter S. Thompson rolling around deranged, Patty Hearst robbing banks in the area, the new 6800 and 8080 chips shipping…

Within a couple of weeks they were printing a newsletter. I hear no leisure suits were damaged in the making of it. The club would meet in French’s garage three times until he moved to Baltimore to take a job with the Social Security Administration. The group would go on without him until late in 1986. By then, the club had played a substantial part in spawning companies like Cromemco, Osborne, and most famously, Apple.

The members of the club traded parts, ideas, rumors, and hacks. The first meeting was really all about checking out the Altair 8800, by an Albuquerque calculator company called MITS, which would fan the flames of the personal computer revolution by inspiring hackers all over the world to build their own devices. It was the end of an era of free love and free information. Thompson described it as a high water mark. Apple would help to end the concept of free, making its founders rich beyond their working-class dreams. 

A newsletter called the People’s Computer Company had gotten an early Altair. Bob Albrecht’s group would later spin out the publication that became Dr. Dobb’s Journal. That first, fateful meeting inspired Steve Wozniak to start working on one of the most important computers of the PC revolution, the Apple I. The club would bounce around between locations until it pretty much moved into Stanford for good. 

I love a classic swap meet, and after meetings, some members of the group would reconvene at a parking lot or a bar to trade parts. They traded ideas, concepts, stories, hacks, schematics, and even software. Which inspired Bill Gates to write his “Open Letter to Hobbyists” - which he sent to the club’s newsletter.  

Many of the best computer minds in the late 70s were members of this collective. 

  • George Morrow would make computers, mostly through his company Morrow Designs, for 30 years.
  • Jerry Lawson invented cartridge-based gaming. 
  • Lee Felsenstein built the SOL, a computer based on the Intel 8080, the Pennywhistle Modem, and designed the Osborne 1, the first real portable computer. He did that with Adam Osborne who he met at the club. 
  • Li-Chen Wang developed Palo Alto Tiny Basic.
  • Todd Fischer would help design the IMSAI.
  • Paul Terrell would create the Byte Shop, a popular store for hobbyists that bought the first 50 Apple 1 computers to help launch the company. It was also the only place to buy the Altair in the area. 
  • Dan Werthimer founded the SETI@home project. 
  • Roger Melen would found Cromemco with Harry Garland. They named the company after Crothers Memorial, the graduate student engineering dorm at Stanford. They built computers and peripherals for the Z80 and S-100 bus. They gave us the Cyclops digital camera, the JS-1 joystick, and the Dazzler color graphics interface - all for the Altair. They would then build the Z-1 computer, using the same chassis as the IMSAI, iterating new computers until 1987 when they sold to Dynatech. 
  • John Draper, also known as Captain Crunch, had become a famous phreaker in 1971, having figured out that a whistle from a box of Captain Crunch would mimic the 2600 hertz frequency used to route calls. His Blue Box design was then shared with Steve Wozniak, who set up a business selling them with his buddy from high school, Steve Jobs. 
  • And of course, Steve Wozniak would design the Apple 1 using what he learned at the meetings and team up with his buddy Steve Jobs to create Apple Computer and launch the Apple I - Woz wanted to give the schematics away for free, while Jobs wanted to sell the boards. That led to the Apple II, which made both wealthy beyond their wildest imaginations and paved the way for the Mac and every innovation to come out of Apple since. 

Slowly the members left to pursue their various companies. When the club ended in 1986, the personal computing revolution had come and IBM was taking the industry over. A number of members continued to meet for decades under a new name, the 6800 club, after the Motorola 6800 chip. 

This small band of pirates and innovators changed the world. Their meetings produced the concepts and designs that would be used in computers from Atari, Texas Instruments, Apple, and every other major player in the original personal computing hobbyist market. The members would found companies that went public and inspired IBM to enter what had been a hobbyist market and turn it into a full-fledged industry. They would democratize the computer, and their counter-culture personalities would humanize computing and even steer it to benefit humans in an era when computers were considered part of the military-industrial complex, and therefore evil. 

They were open with one another, leading to faster sharing of ideas, faster innovation. Until suddenly they weren’t. And the high water mark of open ideas was replaced with innovation that was financially motivated. They capitalized on a recession in chips as war efforts spun down. And they changed the world. And for that, we thank them. And I thank you, listener, for tuning in to this episode of the history of computing podcast. We are so, so lucky to have you. Now tune in to innovation, drop out of binge watching, and go change the world. 

Konrad Zuse


Today we’re going to cover the complicated legacy of Konrad Zuse. 

Konrad Zuse is one of the biggest pioneers in early computing that relatively few have heard about. We tend to celebrate those who lived and worked in Allied countries in the World War II era. But Zuse had been born in Berlin in 1910. He worked in isolation during those early days, building his historic Z1 computer at 26 years old in his parents’ living room. It was 1936. 

That computer was a mechanical computer, and he was really more of a guru when it came to mechanical and electromechanical computing. Mechanical computing was a lot like watch-making, with gears and automata. There was art in it, and Zuse had been an artist early on in life. 

This was the first computer that really contained every part of what we would today think of as a modern computer. It had a central processing control unit. It had memory. It had input through punched tape that could be used to program it. It even had floating point logic. It had an electric motor that ran at 1 hertz. 

This design would live inside future computers that he built, but was destroyed in 1943 during air raids, and would be lost to history until Zuse built a replica in 1989. 

He started building the Z2 in 1940. This used the same memory as the Z1 (64 words) but had 600 relays that allowed him to get up to 5 hertz. He’d also speed up calculations based on those relays, but the power required would jump up to a thousand watts. He would hand it over to the German DVL, now the German Aerospace Center. If there are Nazis on the moon, his computers likely put them there. 

And this is really where the German authorities stepped in and, as in the US, began funding efforts in technological advancement. They saw the value of modeling all the maths on these behemoths. They ponied up the cash to build the Z3. And this, ironically, turned out to be the first Turing-complete computer. He’d continue with 22-bit word lengths and run at 5 hertz. But this device would have 2,600 relays and would help to solve wing flutter problems and other complicated aerodynamic mathematical mysteries. The machine also used Boolean algebra, a concept brought into computing independently by Claude Shannon in the US. It was finished in 1941, two years before Tommy Flowers finished the Colossus and a year before the Atanasoff-Berry Computer was built. And four years before ENIAC. And this baby was fast. Those relays crunched multiplication problems in 3 seconds. Suddenly you could calculate square roots in no time. But the German war effort was more focused on mechanical computing and this breakthrough was never considered critical to the war effort. Still, it was destroyed by allied air raids, just as its older siblings had been. 

The war had gone from 1939 to 1945, the year he married Gisela and his first child was born. He would finish building the Z4 days before the end of the war and met Alan Turing in 1947. He founded Zuse KG in 1949. The Germans were emerging from a post-wartime depression and normalizing relations with the rest of Europe. The Z4 would finally go into production in Zurich in 1950. His team was now up to a couple dozen people and he was getting known. With electronics getting better, faster, and better known, he was able to bring in specialists and use 2,500 relays - including 21 stepwise relays - to get up to 40 hertz. And to uncomplicate something from a book I read: no, Apple was not the first company to hook a keyboard up to a computer; Zuse’s machines did it in the 50s, using a typewriter to help program the computer. OK, fine, ENIAC did it in 1946… But can you imagine hooking a keyboard up to a device rather than just tapping on the screen?!?! Archaic!

For two years, the Z4 was the only digital computer in all of Europe. But that was all about to change. They would refine the design and build the Z5, delivering it to Leitz GmbH in 1953. The Americans tried to recruit him to join their growing cache of computer scientists, sending Dudley Buck and others out. But he stayed on in Germany. 

They would tinker with the designs, and by 1955 came the Z11, shipping in 1957. This would be the first computer they produced in multiples - building 48 in an almost assembly-line fashion - and it gave them enough money to build their next big success, the Z22. This was his seventh computer and would use vacuum tubes. It actually had an ALGOL 58 compiler. If you can believe it, the University of Applied Sciences, Karlsruhe still has one running! It added a rudimentary form of water cooling, teletype, drum memory, and core memory. They were now part of the computing mainstream. 

And in 1961 they would go transistorized with the Z23. Ferrite memory. 150 kilohertz, Algol 60. This was on par with anything being built in the world. Transistors and diodes. They’d sell nearly 100 of them over the next few years. They would even have Z25 and Z26 variants. The Z31 would ship in 1963. They would make it to the Z43.  But the company would run into financial problems and be sold to Siemens in 1967, who had gotten into computing in the 1950s. Being able to focus on something other than running a company prompted Zuse to write Calculating Space, effectively positing that the universe is a computational structure, now known as digital physics. He wasn’t weird, you’re weird. OK, he was… 

He was never a Nazi, but he did build machines that could have helped their effort. You can trace the history of the mainframe era from gears to relays to tubes to transistors in his machines. IBM and other companies licensed his patents. And many advances were validated by him independently discovering them, like the use of Boolean algebra in computing. But to some degree he was a German in a lost era of history - a fate that often falls to the losers in a war. 

So Konrad Zuse, thank you for one of the few clean timelines. It was a fun romp. I hope you have a lovely place in history, however complicated it may be. And thank you listeners, for tuning in to this episode of the history of computing podcast. We are so lucky to have you stop by. I hope you have a lovely and quite uncomplicated day! 

The Atanasoff-Berry Computer


Today we’re going to cover the Atanasoff–Berry computer (ABC), the first real automatic electronic digital computer.

The Atanasoff-Berry Computer was the brainchild of John Vincent Atanasoff. He was a physics professor at Iowa State College at the time. And it’s like he was born to usher in the era of computers. His dad had emigrated to New York from Bulgaria, then a part of the Ottoman Empire, and moved to Florida after John was born. The fascination with electronics came early, as his dad Ivan was an electrical engineer. And seeking to solve math problems with electronics - well, his mom Iva was a math teacher. He would get his bachelor’s from the University of Florida and go to Iowa State College for his master’s. He’d end up at the University of Wisconsin for his PhD before returning to Iowa State College to become a physics professor. 

But there was a problem with teaching physics. The students in Atanasoff’s physics courses took weeks to calculate equations, getting in the way of learning bigger concepts. So in 1934 he started working on ideas. Ideas like using binary algebra to compute tasks. Using logic circuits to add and subtract. Controlling clocks, using a separate memory from compute tasks, and parallel processing. By 1937 he’d developed the concept of a computer. Apparently many of the concepts came to him while driving late at night in the winter early in 1938. You know, things like functions and using vacuum tubes.

He spent the next year working out the mechanical elements required to compute his logic designs and wrote a grant proposal in early 1939 to get $5,330 of funding to build the machine. The Research Corporation of New York City funded the project, and by 1939 he pulled in a graduate student named Clifford Berry to help him build the computer. He had been impressed by Berry when introduced by another professor, Harold Anderson of the electrical engineering department. They set to work building a computer capable of solving linear equations in the basement of the physics building. By October of 1939 they demonstrated a prototype that had 11 tubes and sent their work off to patent attorneys at the behest of the university. 

One of the main contributions to computing was the concept of memory. Processing that data was done with vacuum tubes, 31 thyratrons, and a lot of wire. Separating processing from memory would mean taking an almost record player approach to storage. 

They employed a pair of rotating drums, each holding 1,600 capacitors - like a record player. The capacitors were arranged in 32 bands of 50, so each drum stored thirty 50-bit numbers (with two spare bands), and because a drum rotated once per second, the machine could add or subtract 30 numbers per second. The concept of storing a binary bit of data in a capacitor and periodically regenerating its charge so it reads as a clean zero or one was the second contribution to computing that persists today. 
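To make the regenerative idea concrete, here is a toy simulation - purely illustrative, with assumed leak and threshold values, not the ABC's actual engineering - in which each drum rotation reads every capacitor and rewrites its bit at full charge, so the data survives indefinitely:

```python
# Toy model of regenerative capacitor memory (the ancestor of DRAM refresh).
# LEAK and THRESHOLD are assumed values for illustration only.

LEAK = 0.8       # fraction of charge surviving one rotation (assumption)
THRESHOLD = 0.5  # read the capacitor as a 1 if charge is above this

def rotate(drum):
    """One drum rotation: charge leaks, then every bit is read and rewritten."""
    refreshed = []
    for charge in drum:
        charge *= LEAK                        # charge decays over the rotation
        bit = 1 if charge > THRESHOLD else 0  # read the bit
        refreshed.append(float(bit))          # regenerate: rewrite at full charge
    return refreshed

drum = [1.0, 0.0, 1.0, 1.0, 0.0]  # one small band of capacitors
for _ in range(100):               # with refresh, the bits survive indefinitely
    drum = rotate(drum)
print(drum)  # [1.0, 0.0, 1.0, 1.0, 0.0]
```

Without the rewrite step, the charges would decay below the threshold within a few rotations and the data would vanish - which is exactly why the regeneration idea mattered.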

The processing wasn’t a CPU as we’d think of it today but instead a number of logic gates that included inverters and input gates for two and three inputs. Each of these had an inverting vacuum tube amplifier and a resistor that defined the logical function. The device took input using decimals on standard IBM 80-column punched cards. It stored results in memory when further tasks were required and the logic operations couldn’t be handled in memory. Much as Atanasoff had done using a Monroe calculator hooked to an IBM tabulating machine when he was working on his dissertation. In many ways, the computer he was building was the next evolution from that just as ENIAC would be the next evolution after. Changing plugs or jumpers on the front panel was akin to programming the computer. Output was also decimal and provided using a display on the front panel.

The previous computers had been electro-mechanical. Gears and wires and coils that would look steampunk to many of us today. But in his paper Computing Machine For the Solution Of Large Systems of Linear Algebraic Equations, Atanasoff had proposed a fully digital device, which they successfully tested in 1942. By then the computer had a mile of wire in it, weighed 700 pounds, had 280 vacuum tubes, and 31 thyratrons. 

The head of the Iowa State College Statistics Department was happy to provide problems to get solved. And so George W. Snedecor became the first user of a computer to solve a real problem. We have been fighting for the users ever since. But then came World War II. Both Atanasoff and Berry got called away to World War II duties and the work on the computer was abandoned. 

The first use of vacuum tubes to do digital computation was almost lost to history. But Mauchly, who built ENIAC, would come later. ENIAC would build on many of the concepts and be programmable, so many consider it to be the first real computer. But Atanasoff deserves credit for many of the concepts we still use today, albeit under the hood!

Most of the technology we have today didn’t exist at the time. They gave us what evolved into DRAM. And between them and ENIAC were Konrad Zuse's Z3 and Colossus. So the “first computer” is a debatable topic. 

With the pioneers off to help win the war, the computer would go into relative obscurity. At least, until the computer business started to get huge and people didn’t want to pay Mauchly and Eckert to use their patent for a computer. Mauchly certainly would have known about the ABC since he saw it in 1941 and actually spent four days with Atanasoff. And there are too many parallels between them to say that some concepts weren’t borrowed. But that shouldn’t take anything away from any of the people involved. Because of Atanasoff, the patents were voided and IBM and other companies saved millions in royalties. ABC would be designated an official IEEE Milestone in 1990, 5 years before Atanasoff passed away. 

And so their contributions would be recognized eventually, and the contributions we can’t know about, due to their decades in the defense industry, are surely recognized by those who enable our freedoms in the US today - just not by the general public. We thank them for their step in the evolution that got us where we are today. Just as I thank you, dear listener, for tuning in to this episode of the history of computing podcast. We are so lucky to have you. 

The Evolution Of Wearables


Mark Weiser was the Chief Technologist at the famed Xerox Palo Alto Research Center, or Xerox PARC, in 1988 when he coined the term "ubiquitous computing.” Technology hadn’t entered every aspect of our lives at the time like it has now. The concept of wearable technology probably kicks off way earlier than you might think. 

Humans have long sought to augment ourselves with technology. This includes eyeglasses, which came along in 1286, and wearable clocks, an era kicked off with the Nuremberg eggs in 1510. The technology got smaller and more precise as our capacity for precision grew. Not all wearable technology is meant to be worn by humans. We strapped cameras to pigeons in 1907.

In the 15th century, Leonardo da Vinci would draw up plans for a pedometer, and that concept would sit on the shelf until Thomas Jefferson picked it back up during his tinkering days. And we would get an abacus ring in 1600. But computers began by needing a lot of electricity to light up those vacuum tubes to replace operations from an abacus, so when the transistor came along in the 40s, we’d soon start looking for ways to augment our capabilities with those. 

Akio Morita and Masaru Ibuka began the wearable technology craze in 1953 when they started developing what would become the TR-55 when it was released in 1955. It was the first transistor radio and when they changed their name to Sony, they would introduce the first of their disruptive technologies. We don’t think of radios as technology as much as we once did, but they were certainly an integral part of getting the world ready to accept other technological advances to come!

Manfred Clynes coined the term cyborg in the article “Cyborgs and Space” in 1960. The next year, Edward Thorp and mathematician and binary algebra guru Claude Shannon wanted to try their hands at beating roulette, so they built a small computer that timed when the ball would land - a computer small enough to fit into a shoe. This would stay a secret until Thorp released his book “Beat the Dealer,” telling readers they got a 44 percent improvement in making bets. By 1969, though, Seiko gave us the first automatic quartz watch. 

Other technologies were coming along at about the same time that would later revolutionize portable computing once they had time to percolate for awhile. Like in the 1960s, liquid crystal displays were being researched at RCA. The technology goes back further, but George H. Heilmeier from RCA Laboratories gets credit for operationalizing the LCD in 1964. 

And Hatano developed a mechanical pedometer to track progress toward 10,000 steps a day; by 1985 he had defined that as the number of steps a person should reach in a day. But back to electronics. 

Moore’s Law marched on. The digital camera traces its roots to 1975, but Kodak didn’t really pursue it. Devices were getting smaller and smaller. Another device we don’t think of as a computer much anymore is the calculator. But kits were being sold by then, and suddenly components had gotten small enough that you could get a calculator in your watch, initially introduced by Pulsar. And those radios were cool, but what if you wanted to listen to what you wanted rather than the radio? Sony would again come along with another hit: the Walkman in 1979, selling over 200 million over the ensuing decade. Akio Morita was a genius, also bringing us digital hearing aids and putting wearables into healthcare. Can you imagine the healthcare industry without wearable technology today? 

You could do more and more, and by 1981 Seiko would release the UC 2000 Wrist PC. By then portable computers were a thing - but not wearables. You could put 2 whopping kilobytes of data on your wrist and use a keyboard that strapped to an arm. Computer watches continued to improve, and by 1984 you could play games on them, like on the Nelsonic Space Attacker Watch. 

Flash memory arguably came along in 1984 and would iterate and get better, providing many, many more uses for tiny devices and flash media cards by 1997. As for those calculator watches, Marty McFly would sport one in 1985’s Back to the Future, and by the time I was in high school they were so cheap you could get them for $10 at the local drug store. A few years later, Nintendo would release the Power Glove in 1989, sparking the imagination of many a nerdy kid who would later build actually functional technology - which, regrettably, the Power Glove was not. 

The first portable MP3 player came along in 1998. It was the MPMan. Prototypes had come along in 1979 with the IXI digital audio player. The Audible player, Diamond Rio, and Personal Jukebox came along in 1998, and on the heels of their success the NOMAD Jukebox came in Y2K. But the Apple iPod exploded onto the scene in 2001 and suddenly the Walkman and Discman were dead and the era of mainstream humans carrying a library of music was upon us, sparking Creative to release the Zen in 2004 and Microsoft the Zune in 2006. 

And those watches. Garmin brought us their first portable GPS in 1990, which continues to be one of the best such devices on the market.

The webcam would come along in 1994 when Canadian researcher Steve Mann built the first wearable wireless webcam. That was the spark that led to the era of the Internet of Things. Suddenly we weren’t just wearing computers. We were wearing computers connected to the inter webs. 

All of these technologies brought to us over the years… They were converging. Bluetooth was invented in 2000. 

By 2006, it was time for the iPod and fitness tracking to converge. Nike+iPod was announced, and Nike would release a small transmitter that fit into a notch in certain shoes. I’ve always been a runner and jumped on that immediately! You needed a receiver at the time for an iPod Nano. Sign me up, said my 2006 self! I hadn’t been into the cost of the Garmin, but soon I was tracking everything. Later I’d get an iPhone and just have it connect. But it was always a little wonky. Then came the Nike+ FuelBand in 2012. I immediately jumped on that bandwagon as well. You had to plug it in at first, but eventually a model came out that synced over Bluetooth and life got better. I would sport that thing until it got killed off in 2014, and a little beyond… Turns out Nike knew about Apple coming into their market, and between Apple, Fitbit, and Android Wear, they just didn’t want to compete in a blue ocean, no matter how big the ocean would be.  

Speaking of Fitbit: they were founded in 2007 by James Park and Eric Friedman with a goal of bringing fitness trackers to market. And they capitalized on an exploding market for tracking fitness. But it wasn’t until the era of the app that they achieved massive success, and in 2014 they released apps for iOS, Android, and Windows Mobile - which was still a thing. The watch and mobile device came together in 2017 when they released their smartwatch. They are now the 5th largest wearables company. 

Android Wear had been announced at Google I/O in 2014. Now called Wear OS, it’s a fork of Android Lollipop that pairs with Android devices and integrates with the Google Assistant. It can connect over Bluetooth, Wi-Fi, and LTE and powers the Moto 360, the LG G, and the Samsung Gear. And there are a dozen other manufacturers that leverage the OS in some way, now with over 50 million installations of the apps. It can use Hangouts and leverages voice to do everything from checking into Foursquare to dictating notes. 

But the crown jewel among smartwatches is definitely the Apple Watch. That came out of hiring former Adobe CTO Kevin Lynch to bring a Siri-powered watch to market, which happened in 2015. With over 33 million sold and, as of this recording, in its fifth series, it can now connect over LTE, Wi-Fi, or through a phone using Bluetooth. There are apps, complications, and a lot of sensors on these things, giving them almost limitless uses.

Those glasses from 1286? Well, they got a boost in 2013 when Google put images on them. Long a desire of science fiction, Google Glass brought us into the era of the heads-up display. But Sega had introduced their virtual reality headset in 1991, and the technology actually dates back to the 70s, out of JPL and MIT. Nintendo experimented with the Virtual Boy in 1994. Apple released QuickTime VR shortly thereafter, but it wasn’t that great. I even remember some VGA “VR” headsets in the early 2000s, but they weren’t that great either. It wasn’t until the Oculus Rift came along in 2012 that VR seemed ready. These days, that’s become the gold standard in VR headsets. The sign to the market was when Facebook bought Oculus for $2 billion in 2014, and the market has steadily grown ever since. 

Given all of these things that came along in 2014, I guess it did deserve the moniker “The Year of Wearable Technology.” And with a few years to mature, now you can get wearable sensors built into yoga pants, like the Nadi X yoga pants; smartwatches ranging from just a few dollars to hundreds or thousands from a variety of vendors; sleep trackers; posture trackers; sensors in everything, bringing a convergence between the automated home and wearables in the Internet of Things; wearable cameras like the GoPro; smart glasses and VR headsets from dozens of vendors; smart gloves; wearable onesies; sports clothing to help measure and improve performance; smart shoes; and even an Alexa-enabled ring. 

Apple waited pretty late to come out with Bluetooth headphones, releasing AirPods in 2016. These bring sensors into the ear - the main reason I think of them as wearables, where I didn’t think of a lot of the devices that came before them that way. Now on their second generation, they are some of the best headphones you can buy. And the market seems poised to just keep growing. Especially as we get more and more sensors and more and more transistors packed into the tiniest of spaces. It truly is ubiquitous computing. 

The Rise of Netflix


Today we’re going to cover what many of you do with your evenings: Netflix.

Now, the story of Netflix comes in a few stages that I like to call the founding and pivot, the Blockbuster killer, the streaming revolution, and where we are today: the new era of content. Today Netflix sits at more than a $187 billion market cap. And they have become one of the best known brands in the world. But this story has some pretty stellar layers to it. And one of the most important, in an era of eroding (or straight up excavated) consumer confidence, is this thought: the IPOs that the dot com buildup created made fast millionaires. But those from the Web 2.0 era made billionaires. And you can see that in the successes of Netflix CEO Reed Hastings.


Hastings founded Pure Software in 1991. They made software that helped other people make… software. They went public in 1995, merged with Atria, and were acquired the next year by Rational Software - making him and Netflix co-founder Marc Randolph, well, obsolete. Hastings made investors and himself a lot of money. Which at that point was millions and millions of dollars. So he went on to sit on the State Board of Education and get involved in education.

Act I: The Founding and Pivot

He and Marc Randolph had carpooled to work while at Pure Atria and had tossed around a lot of ideas for startups. Randolph landed on renting DVDs by mail, using the still somewhat new Internet. Randolph would become CEO and Hastings would invest the money to get started. Randolph brought in a talented team from Pure Atria and they got to work using an initial investment of two and a half million dollars in 1997. 

But taking the brick and mortar concept that video stores had been successfully using wasn’t working. They had figured out how to ship DVDs cheaply, how to sell them (until Amazon basically took that part of the business away), and even how to market the service by inking deals with DVD player manufacturers. The video stores had been slow to adopt DVDs after the disaster they’d had with LaserDisc, so the people who made DVDs saw the service as a way to get more people to buy the players. And it was mostly working. But the retention numbers sucked and they were losing money. 

So they tinkered with the business model, relentlessly testing every idea. And Hastings came back to take the role of CEO and Randolph stepped into the role of president. One of those tests had been to pivot from renting DVDs to a subscription model. And it worked. They gave customers a free month trial. The subscription and the trial are now all too common. But at the time it was a wildly innovative approach. And people loved it. Especially those who could get a DVD the next day. They also gave Netflix huge word of mouth. In 1999 they were at 110,000 subscribers. Which is how I first got introduced to them in 2000, when they were finally up to 300,000 subscribers. I had no clue, but they were already thinking about streaming all the way back then. 

But they had to survive this era. And as is often the case when there’s a free month that comes at a steep cost, Netflix was bleeding money. And running out of cash. They planned to go IPO. But because the dot com bubble had burst, cash was becoming hard to come by. They had been well funded, taking a hundred million dollars by the time they got to a Series E. And they were poised for greatness. But there was that cash crunch. And a big company to contend with: Blockbuster. With 9,000 stores, $6 billion in revenue, tens of thousands of employees, and millions of rentals being processed a month, Blockbuster was the king of the video rental market. 

The story goes that Hastings got the Netflix idea from a late fee. So they would do subscriptions. But they had sold DVDs and done rentals first. And really, they found success because of the pivot, wherever that pivot came from. And in fact, Hastings and Randolph had flown to Texas to try and sell Netflix to Blockbuster. Pretty sure Blockbuster wishes they’d jumped on that. 

Which brings us to Act II: The Blockbuster Killer. 

Managing to keep enough cash to make it through the growth, they managed to go public in 2002 and finally got profitable in 2003. Soon they would be shipping over a million DVDs every single day. They quickly rose through word of mouth. That one day shipping was certainly a thing. They pumped money into advertising and marketing. And they continued a meteoric growth. 

They employed growth hacks and they researched a lot of options for the future, knowing that technology changes were afoot. Randolph investigated opening kiosks with Mitch Lowe. Netflix wouldn’t really be interested in doing so, and Randolph would leave the company in 2002 on good terms, wealthy after the company’s successful IPO. And Lowe took the Video Droid concept of a VHS rental vending machine to DVDs after Netflix abandoned it, and went to Redbox, which McDonald’s had initially started in 2003. Many of the ideas he and Randolph tested in Vegas as part of Netflix would be used there, and by 2005 Redbox would try to sell to Netflix and Blockbuster. 

But again, Blockbuster failed to modernize. They didn’t have just one shot at buying Netflix; Reed Hastings flew out there four times to try and sell the company to Blockbuster. Blockbuster launched their own subscription service in 2004, but it was flawed, and there was bad press around late fees and other silly missteps. Meanwhile Netflix was growing fast. 

Netflix shipped the billionth DVD in 2007. By then, there were more Redboxes than Blockbusters, and by 2011 the kiosks accounted for half of the rental market. Blockbuster was finally forced to file for bankruptcy in 2010, after being a major name brand for 25 years. 

Netflix was modernizing though. Not with kiosks - they were already beginning to plan for streaming. And a key to their success, as in the early days, was relentless self-improvement and testing every little thing, all the time. They took their time and did it right. 

Broadband was on the rise. People had more bandwidth and were experimenting with streaming music at work. Netflix posted earnings of over a hundred million dollars in 2009. But they were about to do something special. 

And so Act III: The Streaming Revolution

The streaming world came online in the early days of the Internet, when Severe Tire Damage streamed the first song out of Xerox PARC in 1993. But it wasn’t really until YouTube came along in 2005 that streaming video became viable. In 2006 Google would acquire YouTube, which was struggling with over a million dollars a month in bandwidth fees and huge legal issues with copyrighted content. This was a signal to the world that streaming was ready. I mean, Saturday Night Live was in, so it must be real! 

Netflix first experimented with making their own content in 2006 with a film production division they called Red Envelope Films. They made over a dozen movies but ultimately shut the division down, which let Netflix focus on another initiative before they came back to making their own content.

Netflix would finally launch streaming media in 2007, right around the time they shipped that billionth DVD. This was the same year Hulu launched, a joint venture of NBC Universal and News Corporation with distribution partners like AOL, Comcast, MSN, and Yahoo. But Netflix had a card up its sleeve. Or a House of Cards, the first show they produced, which launched in 2013. Suddenly, Netflix was much, much more than a DVD service. They were streaming movies, and creating content. Wildly popular content. They’ve produced hundreds of shows now in well over a dozen languages. 2013 also brought us Orange is the New Black, another huge success. They kicked off a whole Marvel universe in 2015 with Daredevil, followed by Jessica Jones, Luke Cage, and Iron Fist, and tied that up with The Defenders. But along the way we got The Crown, Narcos, and the almost iconic at this point Stranger Things. Not to mention Bojack Horseman, Voltron, and the list just goes on and on. 

That era of expansion would include more than just streaming. They would expand into Canada in 2010, finally going international. They would hit 20 million subscribers in 2011. By 2012 they would be over 25 million subscribers. By 2013 they would exceed 33 million. In 2014 they hit 50 million. By the end of 2015 they were at almost 70 million. 2016 was huge, as they announced an expansion into 130 new international territories at CES. And the growth continued. Explosively. At this point, despite competition popping up everywhere, Netflix does over $20 billion a year in revenue and has been as instrumental in revolutionizing the world as anyone. 

That competition now includes Disney Plus, Apple, Hulu, Google, and thousands of podcasts and home-spun streamers, even on Twitch. All battling to produce the most polarizing, touching, beautiful, terrifying, or mesmerizing content. 

Oh and there’s still regular tv I guess… 


So Y2K. The dot com bubble burst. And the overnight millionaires were about to give way to something new. Something different. Something on an entirely different scale. 

As with many of the pre-crash dot com companies, Netflix had initially begun with a pretty simple idea. Take the video store concept, where you paid per rental. And take it out of brick and mortar and onto the internets. And if they had stuck with that, we probably wouldn’t know who they are today. We would probably be getting our content from a blue and yellow box called Blockbuster. But they went far beyond that, and in the process, they changed how we think of that model. And that subscription model is how you now pay for almost everything, including software like Microsoft Office. 

And Netflix continued to innovate. They made streaming media mainstream. They made producing content a natural adjacency to a streaming service. And they let millions cut the cord from cable and step away from traditional media. They became a poster child for the fact that out of the dot com bubble and Great Recession, big tech companies would go from making fast millionaires to a different scale: fast billionaires!

As we move into a new post COVID-19 era, a new round of change is about to come. Nationalism is regrettably becoming more of a thing. Further automation and adoption of new currencies may disrupt existing models even further. We have so much content we have to rethink how search works. And our interpersonal relationships will be forever changed from these months in isolation. Many companies are about to go the way of Blockbuster. Including plenty that have been around much, much longer than Blockbuster was. But luckily, companies like Netflix are there to remind us that any company can keep reinventing itself, like in a multi-act play. 

And we owe them our thanks for that - and because what the heck else would we do stuck in quarantine, right?!?! So to the nearly 9,000 people that work at Netflix: we 167 million plus subscribers thank you. For revolutionizing content distribution, revolutionizing business models, and for the machine learning and other technological advancements we didn’t even cover in this episode. You are lovely. 

And thank you listeners, for abandoning binge watching Tiger King long enough to listen to this episode of the History of Computing Podcast. We are so lucky to have you. Now get back to it!


Piecing Together Microsoft Office


Today we’re going to cover the software that would become Microsoft Office. 

Microsoft Office was announced at COMDEX in 1988. The Suite contained Word, Excel, and PowerPoint. These are still the core applications included in Microsoft Office. But the history of Office didn’t start there. 

Many of the innovations we use today began life at Xerox. And Word is no different. Microsoft Word began life as Multi-Tool Word in 1981, when Charles Simonyi was hired away from Xerox PARC, where he had worked on one of the earlier word processors, Bravo. 

He brought in Richard Brodie, and by 1983 they would release it for DOS, simplifying the name to just Microsoft Word. They would port it to the Mac in 1985, shortly after the release of the iconic 1984 Macintosh. Being way more feature-rich than MacWrite, it was an instant success. 2.0 would come along in 1987, and they would be up to version 5 by 1992. But Word for Windows came along in 1989, a little ahead of Windows 3.0. So Word went from DOS to Mac to Windows. 

Excel has a similar history. It began life as Multiplan in 1982 though. At the time, it was popular on CP/M and DOS, but when Lotus 1-2-3 came along, it knocked everything else out of the hearts and minds of users, and Microsoft regrouped. Doug Klunder would be the Excel lead developer and Jabe Blumenthal would act as program manager. They would meet with Bill Gates and Simonyi, hammer out the look and feel, and release Excel for the Mac in 1985. And Excel came to Windows in 1987. By Excel 5 in 1993, Microsoft would have completely taken the spreadsheet market, and suddenly Visual Basic for Applications (VBA) would play a huge role in automating tasks. Regrettably, then came macro viruses, but for more on those check out the episode on viruses. In fact, along the way, Microsoft would pick up a ton of talented developers, including Bob Frankston, a co-creator of the original spreadsheet, VisiCalc.

PowerPoint was an acquisition. It began life as Presenter at Forethought, a startup, in 1983. And Robert Gaskins, a former research manager from Bell Northern Research, would be brought in to get the product running on Windows 1. It would become PowerPoint when it was released for the Mac in 1987 and was wildly successful, selling out all of the copies from the first run. 

But then Jeff Raikes from Microsoft started getting ready to build a new presentation tool. Bill Gates had initially thought it was a bad idea but eventually gave Raikes the go-ahead to buy Forethought and Microsoft PowerPoint was born. 

And that catches up to that fateful day in 1988 when Bill Gates announced Office at COMDEX in Las Vegas, which at the time was a huge conference.

Then came the Internet. Microsoft Mail was released for the Mac in 1988 and bundled with Windows from 1991 on. Microsoft also released a tool called Inbox. But then came Exchange, expanding beyond mail and into contacts, calendars, and eventually much more. Mail was really basic, so for Exchange, Microsoft released Outlook, which was added to Office 97, with an installer bundled with Exchange Server. 

Office Professional in that era included a database utility called Access. We’ve always had databases. But desktop databases had been dominated by dBase and FoxPro up until 1992, when Microsoft Access began to chip away at their market share. Microsoft had been trying to get into that market since the mid-80s with R:Base and Omega, but when Access 2 dropped in 1994, people started to take notice, and by the release of Office 95 Professional it could be purchased as part of a suite and integrated cleanly. I can still remember those mdb files and setting up data access objects and later ActiveX controls!

So the core Office components came together in 1988, and by 1995 the Office Suite was the dominant productivity suite on the market. It got better in 97. Except for the Office Assistant, designed by Kevan Atteberry and lovingly referred to as Clippy. By 2000, Office had become the de facto standard. Everything else had to integrate with Office. That continued in the major 2003 and 2007 releases. And the products just iterated to become better and better software. 

And they continue to do that. But another major shift was on the way. A response to Google Apps, which had been released in 2006. The cloud was becoming a thing. And so Office 365 went into beta in 2010 and was launched in 2011. It includes the original suite, OneDrive, SharePoint, Teams for chatting with coworkers, Yammer for social networking, Skype for Business (although video can now be done in Teams), Outlook and Outlook online, as well as Publisher, InfoPath, and Access for Windows. 

This Software + Services approach turned out to be a master-stroke. Microsoft was able to finally raise prices and earned well over a 10% boost to the Office segment in just a few years. The pricing for subscriptions over the term of what would have been a perpetual license was often 30% more. Yet, the Office 365 subscriptions kept getting more and more cool stuff. And by 2017 the subscriptions captured more revenue than the perpetual licenses. And a number of other services can be included with Office 365. 
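To make that pricing math concrete - with completely made-up numbers, since actual prices varied by product and year - here’s a quick back-of-the-envelope sketch:

```python
# Hypothetical illustration (not real Microsoft pricing): comparing a
# one-time perpetual license to a subscription over the same ~3-year
# replacement cycle, to show what a ~30% premium looks like.
perpetual_price = 300.0        # one-time license, kept about 3 years
subscription_per_year = 130.0  # recurring subscription price
years = 3

subscription_total = subscription_per_year * years
premium = (subscription_total - perpetual_price) / perpetual_price

print(f"Subscription total over {years} years: ${subscription_total:.0f}")
print(f"Premium over the perpetual license: {premium:.0%}")
```

In this sketch the subscriber pays about 30% more over the cycle - but, as noted above, the subscription kept getting more and more cool stuff in return.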

Another huge impact is the rapid disappearing act of on-premises Exchange servers. Once upon a time a small business would have an Exchange server and then, as they grew, move it to a colocation facility, hire MCSE engineers (like me) to run it, and watch the cost of providing groupware balloon. Moving that to Microsoft means that Microsoft can charge more and the customer can still see a net savings, even though the subscriptions cost more - because they don’t have to pay people to run those servers. OneDrive moves files off old filers, and so on. 

And the Office apps provided aren’t just for Windows and Mac. Pocket Office would come in 1996, for Windows CE. Microsoft would have Office apps for all of their mobile operating systems. And in 2009 we would get Office for Symbian. And then for iPhone in 2013 and iPad in 2014. Then for Android in 2015. 

Today over 1 and a quarter billion people use Microsoft Office. In fact, not a lot of people have *not* used Office. Microsoft has undergone a resurgence in recent years and is more nimble and friendly than ever before. Many of the people that created these tools are still at Microsoft. Simonyi left Microsoft for a time. But they ended up buying his company later. During what we now refer to as the “lost decade” at Microsoft, I would always think of these humans. Microsoft would get dragged through the mud for this or that. But the engineers kept making software. And I’m really glad to see them back making world class APIs that do what we need them to do. And building good software on top of that. 

But most importantly, they set the standard for what a word processor, spreadsheet, and presentation tool would look like for a generation. And the ubiquity the software obtained allowed for massive leaps in adoption and innovation. Until it didn’t. That’s when Google Apps came along, giving Microsoft a kick in the keister to put up or shut up. And boy did Microsoft answer. 

So thank you to all of them. I probably never would have written my first book without their contributions to computing. And thank you listener, for tuning in, to this episode of the history of computing podcast. We are so lucky to have you. Have a great day. 

500 Years Of Electricity


Today we’re going to review the innovations in electricity that led to the modern era of computing. 

As is often the case, things we knew as humans, once backed up with science, became much, much more. Electricity is a concept that has taken hundreds of years to really take shape and be harnessed. And whether having done so is a good thing for humanity, we can only hope. 

We’ll take this story back to 1600. Early scientists were studying positive and negative elements and forming an understanding that electricity flowed between them. Like the English natural scientist William Gilbert, who first established some of the basics of electricity and magnetism in his seminal work De Magnete, published in 1600, where he coined the Latin term electricus. There were others, but the next jump in understanding didn’t come until the time of Sir Thomas Browne, who along with other scientists of the day continued to refine theories. He was important because he documented where the scientific revolution was in his 1646 Pseudodoxia Epidemica. He coined the English word electricity. And computer, by the way. 

And electricity would be debated for a hundred years and tinkered with in scientific societies, before the next major innovations would come. Then another British scientist, Peter Collinson, sent Benjamin Franklin an electricity tube, which these previous experiments had begun to produce. 

Benjamin Franklin spent some time writing back and forth with Collinson, and in 1752 famously flew a kite in a storm, showing that electrical current flowed down the wet kite string and could be drawn off a metal key - proof that lightning was electrical and that electricity behaved like a fluid. Linked capacitors had come along in 1749. The same year as the kite experiment, Thomas-François Dalibard also proved the hypothesis using a tall metal rod struck by lightning. 

James Watt was another inventor and scientist, studying steam engines from the 1760s to the late 1790s. The watt is now used to quantify the rate of energy transfer - a unit of power named in his honor. Today we often measure those watts in terms of megawatts. His work on engines would prove important for converting thermal into mechanical energy and producing electricity later. But not yet. 

In 1799, Alessandro Volta built a battery, the voltaic pile. We still call the potential that pushes the current of an amp through the resistance of an ohm a volt. Suddenly we were creating electricity from an electrochemical reaction. 

Humphry Davy took a battery and invented the “arc lamp,” by wiring a piece of carbon to it that glowed brightly.

Budding scientists continued to study electricity and refine the theories. And by the 1820s, Hans Christian Orsted proved that an electrical current creates a circular magnetic field when flowing through a wire. Humans were able to create electrical current and harness it from nature. Inspired by Orsted’s discoveries, André-Marie Ampère began to put math to what Orsted had observed. Ampère observed that two parallel wires carrying electric currents attract or repel each other, depending on the direction of the currents - the foundational principle of electrodynamics. He took electricity to an empirical place. He figured out how to measure electricity, and for that, the ampere is now the unit of measurement we use to track electric current.

In 1826, Georg Ohm defined the relationship between current, voltage, and resistance. This is now called “Ohm’s Law” and we still measure electrical resistance in ohms. 
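For the math-inclined, Ohm’s law is usually written like this, with electrical power following directly from it:

```latex
% Ohm's law: voltage V across a resistance R carrying current I
V = I R
% Power dissipated in the resistor follows directly:
P = V I = I^{2} R = \frac{V^{2}}{R}
```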

Michael Faraday was working in electricity as well, starting by replicating a voltaic pile, and he kinda’ got hooked. He got wind of Orsted’s discovery as well, and he ended up building an electric motor. He studied electromagnetic rotation, and by 1831 was able to generate electricity using what we now call the Faraday disk. He was the one that realized the link between the various forms of electricity and experimented with various currents and voltages to change outcomes. He also gave us the Faraday cage, Faraday constant, Faraday cup, Faraday's law of induction, Faraday's laws of electrolysis, the Faraday effect, Faraday paradox, Faraday rotator, Faraday wave, and the Faraday wheel. It’s no surprise that Einstein kept a picture of Faraday in his study. 

By 1835, Joseph Henry developed the electrical relay and we could send current over long distances. 

Then, in the 1840s, a brewer named James Joule, who had been fascinated by electricity since he was a kid, discovered the relationship between mechanical work and heat. And so the law of conservation of energy was born. Today, we still call a joule a unit of energy. He would also study the relationship between the current flowing through a resistor and the heat it gives off, which we now call Joule’s first law. By the way, he also worked with Lord Kelvin to develop the Kelvin scale. 
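In symbols, Joule’s first law says the heat generated grows with the square of the current:

```latex
% Joule's first law: heat Q produced by current I flowing
% through resistance R for a time t
Q = I^{2} R t
```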

In 1844, Samuel Morse gave us the electrical telegraph and Morse code. After a few years coming to terms with all of this innovation, James Clerk Maxwell unified magnetism and electricity and gave us Maxwell’s Equations, which gave way to electric power, radios, television, and much, much more. 

By 1878 we knew more and more about electricity. The boom of telegraphs had sparked many a young inventor into action, and by 1878 we saw the lightbulb and a lamp that could run off a generator. This led Thomas Edison to found Edison Light and Electric and continue to refine electric lighting. By 1882, Edison fired up the Pearl Street power station and could light up 5,000 lights using direct current power. A hydroelectric station opened in Wisconsin the same year. The next year, Edison observed thermionic emission, now called the Edison effect, which would later make the vacuum tube possible.

Tesla gave us the Tesla coil and championed alternating current, which made it more efficient to send electrical current to faraway places. Tesla would go on to develop polyphase AC power and patent the generator-to-transformer-to-motor-and-light system we use today, which was bought by George Westinghouse. By 1893, Westinghouse would use AC power to light up the World’s Fair in Chicago, a turning point in the history of electricity. 

And from there, electricity spread fast. Humanity discovered all kinds of uses for it. 1908 gave us the vacuum and the washing machine. The air conditioner came in 1911 and 1913 brought the refrigerator. And it continued to spread.

By 1920, electricity was so important that it needed to be regulated in the US, and the Federal Power Commission was created. By 1933, the Tennessee Valley Authority established a plan to build dams across the US to light cities. And by 1935 the Federal Power Act was enacted to regulate the impact of dams on waterways.

And in the history of computing, the story of electricity kinda’ ends with the advent of the transistor, in 1947. Which gave us modern computing. The transmission lines for the telegraph put people all over the world in touch with one another. The time saved with all these innovations gave us even more time to think about the next wave of innovation. And the US and other countries began to ramp up defense spending, which led to the rise of the computer. But none of it would have been possible without all of the contributions of all these people over the years. So thank you to them.

And thank you, listeners, for tuning in. We are so lucky to have you. Have a great day!

Y Combinator


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to look at Y Combinator. 

Here’s a fairly common startup story. After finishing his second book on Lisp, Paul Graham decides to found a company. He and Robert Morris start Viaweb in 1995, along with Trevor Blackwell. Some of the code came from Lisp - you know, like the books Graham had worked on. It was one of the earliest SaaS startups, which let users host online stores - similar to Shopify today. Viaweb had an investor named Julian Weber, who invested $10,000 in exchange for 10% of the company. Weber gave them invaluable advice. By 1998 they were acquired by Yahoo! for about $50 million in stock, which was a little shy of half a million shares. Viaweb would become the Yahoo Store. Both Graham and Morris have PhDs from Harvard.

Here’s where the story gets different. Graham would write a number of essays, establishing himself as an influencer of sorts. 2005 rolls around and Graham decides to start doing seed funding for startups, following the model that Weber had established with Viaweb. He gets the gang back together, hooking up with his Viaweb co-founders Robert Morris (the guy that wrote the Morris worm) and Trevor Blackwell, and adding girlfriend and future wife Jessica Livingston - and they create Y Combinator. 

Graham would pony up $100,000, Morris and Blackwell would each chip in $50,000, and they would start with $200,000 to invest in companies. Because the founders were Harvard alumni, it was called Cambridge Seed. And as is the case with many of the companies they invest in, the name would change quickly, to Y Combinator. 

They would hold their first session in Boston and called it the Summer Founders Program. And they got a great batch of startups! So they decided to do it again, this time in Mountain View, using space provided by Blackwell. This time, a lot more startups applied and they decided to run two a year, one in each location. And they had plenty of startups looking to attend. But why? 

There have always been venture capital firms. Well, not always, but ish. They invest in startups. And incubators had become more common in business since the 1950s. The incubators mostly focused on planning, launching, and growing a company. But accelerators were just starting to become a thing, with the first one maybe being Colorado Venture Centers in 2001. The concept of accelerators really took off because of Y Combinator though.

There have been incubators and accelerators for a long, long time. Y Combinator didn’t really create those categories. But they did change the investment philosophy of many. You see, Y Combinator is an investor and a school. But. They don’t provide office space to companies. They have an open application process. They invest in the ideas of founders they like. They don’t invest much. But they get equity in the company in return. They like hackers. People that know how to build software. People who have built companies and sold companies. People who can help budding entrepreneurs. 

Graham would launch Hacker News in 2007. Originally called Startup News, it’s a service like Reddit that was developed in a language Graham co-wrote called Arc - more a stripped-down dialect of Lisp, built on Racket. He’d release Arc in 2008. I wonder why he prefers technical founders…

They look for technical founders. They look for doers. They look for great ideas, but they focus on the people behind the ideas. They coach on presentation skills, pitch decks, making products. They have a simple motto: “Make Something People Want”. And it works. By 2008 they were investing in 40 companies a year and running a program in Boston and another in Silicon Valley. It was getting to be a bit much so they dropped the Boston program and required founders who wanted to attend the program to move to the Bay Area for a couple of months. They added office hours to help their founders and by 2009 the word was out, Y Combinator was the thing every startup founder wanted to do. Sequoia Capital ponied up $2,000,000 and Y Combinator was able to grow to 60 investments a year. And it was working out really well. So Sequoia put in another $8,250,000 round. 

The program is a crash course in building a startup. They look to grow fast. They host weekly dinners that Graham used to cook. Often with guest speakers from the VC community or other entrepreneurs. They build towards Demo Day, where founders present to crowds of investors. 

It kept growing. It was an awesome idea but it took a lot of work. The more the word spread, the more investors like Yuri Milner wanted to help fund every company that graduated from Y Combinator. They added non-profits in 2013 and continued to grow. By 2014, Graham stepped down as President and handed the reins to Sam Altman. The amount they invested went up to $120,000. More investments required more leaders, and others would come in to run various programs. Altman would step down in 2019.

They would experiment with some other ideas but in the end, the original concept was perfect. Several alumni would come back and contribute to the success of future startups. People from companies like Twitch. In fact, their cofounder Michael Seibel would recommend Y Combinator to the founders of Airbnb. He ran Y Combinator Core for a while. Many of the founders who had good exits have gone from starting companies to investing in companies.  

Y Combinator changed the way seed investments happen. By 2015, a third of startups got their Series A funding from accelerators. The combined valuation of the Y Combinator companies that could be surveyed is well over $150 billion in market capitalization. Graduates include Airbnb, Stripe, Dropbox, Coinbase, DoorDash, Instacart, and Reddit. Massive success has led to over 15,000 applicants for just a few spots. To better serve so many companies, they created a website called Startup School in 2017, and over 1,500 startups went through it in the first year alone. 

Y Combinator has been quite impactful in a lot of companies. More important than the valuations and name brands, graduates are building software people want. They’re iterating societal change, spurring innovation at a faster pace. They’re zeroing in on helping founders build what people want rather than just spinning their wheels and banging their heads against the wall trying to figure out why people aren’t buying what they’re selling. 

My favorite part of Y Combinator has been the types of founders they look for. They give $150,000 to mostly technical founders. And they get 7% of the company in exchange for that investment. And their message of finding the right product-market fit has provided them with massive returns on their investments. At this point they’ve helped over 2,000 companies by investing, and countless others with Startup School and by promoting them on Hacker News. 

Not a lot of people can say they changed the world. But this crew did. And while there’s a chance Airbnb, Doordash, Reddit, Stripe, Dropbox, and countless others would have launched and succeeded anyway, we’re all better off for the thousands of companies who have gone through YC having done so. So thank you for helping us get there. 

And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so, so lucky to have you. Have a great day. 


From The Palm Pilot To The Treo


Today we’re going to look at the history of the Palm. 

It might be hard to remember at this point, but once upon a time, we didn’t all have mobile devices connected to the Internet. There was no Facebook or Grubhub. But in the 80s, computer scientists were starting to think about what ubiquitous computing would look like. We got the Psion and the HP Jaguar (which ran on DOS). But these seemed much more like really small laptops. With tiny keyboards. 

General Magic spun out of Apple in 1990 but missed the mark. Other devices were continuing to hit the market, some running PenPoint from Go Corporation - but none really worked out. But former Intel, GRiD, and then Tandy employee Jeff Hawkins envisioned a personal digital assistant and founded Palm Computing to build one in 1992. He had been interested in pen-based computing and had worked with pattern recognition for handwriting at UC Berkeley. He asked Ed Colligan of Radius and Donna Dubinsky of Claris to join him. She would become CEO.

They worked with Casio and Tandy to release the Casio Zoomer in 1993. The Apple Newton came along in 1993 and partially due to processor speed and partially due to just immaturity in the market, both devices failed to resonate with the market. The Newton did better, but the General Magic ideas that had caught the imagination of the world were alive and well. HP Jaguars were using Palm’s synchronization software and so they were able to stay afloat. 

And so Hawkins got to work on new character recognition software. He got a tour of Xerox PARC, as did everyone else in computing, and they saw Unistrokes, which had been developed by David Goldberg. Unistrokes resembled shorthand and required users to learn a new way of writing, but proved much more effective. Hawkins went on to build Graffiti, based on that same concept, and because Xerox had patented the technology, the two would end up in legal battles until Palm eventually settled for $22.5 million. 

More devices were coming every year, and by 1995 Palm Computing was getting close to releasing a device. They had about $3 million to play with. They would produce a device that had fewer buttons and so a larger screen than other devices. It had the best handwriting technology on the market. It was the perfect size - which Hawkins had made sure of by carrying a block of wood around in his pocket and to meetings to test it. The only problem was that they ran out of cash during the R&D and couldn’t take it to market. But they knew they’d hit the mark. 

The industry had been planning for a pen-based computing device for some time, and US Robotics saw an opening. Palm ended up selling to US Robotics, who had made a bundle selling modems, for $44 million. And they got folded into another acquisition, 3Com, which had been built by Bob Metcalfe, who co-invented Ethernet. 3Com banked on Ethernet being the next wave. And they were right. But they also banked on pen computing. And were right again!

US Robotics launched the Palm Pilot 1000 with 128k of RAM and the Palm Pilot 5000 with 512k of RAM in 1996. This was the first device that actually hit the mark. People became obsessed with Graffiti. You connected it to the computer using a serial port to synchronize Notes, Contacts, and Calendars. It seems like such a small thing now, but it was huge then. They were an instant success. Everyone in computing knew something would come along, but they didn't realize this was it. Until it was! HP, Ericsson, Sharp, NEC, Casio, Compaq, and Philips would all release handhelds, but the Palm was the thing.

By 1998 the three founders were done getting moved around and left, creating a new company to make a similar device, called Handspring. Apple continued to flounder in the space, releasing the MessagePad 2000 and then the eMate. But the Handspring devices were eerily similar to the Palms. Both would get infrared and USB, and the Handspring Visor would even run Palm OS 3. But the founders had a vision for something more.

They would take Handspring public in 2000. 3Com would take Palm public in 2000. The only problem was the dot-com bubble. Well, that and Research in Motion began to ship the BlackBerry in 1999, and the next wave of devices began to chip away at the market share. Shares dropped over 90%, and by 2002 Palm had to set up a subsidiary for the Palm OS.

But again, the crew at Handspring had something more in mind. They released the Treo in 2002. The Handspring Treo was, check this out, a smartphone. It could do email, SMS, and voice calls. Over the years they would add a camera, GPS, MP3, and Wi-Fi. Basically what we all expect from a smartphone today.

Handspring merged with Palm in 2003, and they released the Palm Treo 600. They bought back the company the OS had been spun out into, finally merging everything back together in 2005. Meanwhile, Pilot pens had sued Palm, so the devices were then just called Palm. We got a few more models, with the Palm V probably being the best, a few new features, and lots and lots of syncing problems as new sync tools were added.

Now that all of the parts of the company were back together, they started planning for a new OS, which they announced in 2009. And webOS was supposed to be huge. And they announced the Palm Pre, the killer next Smartphone. 

The only problem is that the iPhone had come along in 2007. And Android was released in 2008. Palm had the right idea. They just got sideswiped by Apple and Google. 

And they ran out of money. They were bought by Hewlett-Packard in 2010 for $1.2 billion. Under new management the company was again split into parts, with webOS never really taking off, the Pre 3 never really shipping, and TouchPads not actually being any good, ultimately ending in the CEO of HP getting fired (along with other things). Once Meg Whitman stepped in as CEO, webOS was open sourced and the remaining assets were sold off to LG Electronics to be used in smart TVs.

The Palm Pilot was the first successful handheld device. It gave us permission to think about more. The iPod came along in 2001, in a red ocean of crappy MP3 handheld devices. And over time it would get some of the features of the Palm. But I can still remember the day the iPhone came out, and the few dozen people I knew with Treos cursing because they knew it was time to replace them. In the meantime, Windows CE and other mobile operating systems had slowly pilfered market share away from Palm. The founders invented something people truly loved. For a while. And they had the right vision for the next thing that people would love. They just couldn't keep up with the swell that would become the iPhone and Android, which now own pretty much the entire market.

And so Palm is no more. But they certainly left a dent in the universe. And we owe them our thanks for that. Just as I owe you my thanks for tuning in to this episode of the History of Computing Podcast. We are so lucky you decided to listen in - you're welcome back any time! Have a great day!

The History Of The Computer Modem


Today we’re going to look at the history of the dial-up computer modem. 

Modem stands for modulator/demodulator. That modulation is carrying a property (like voice or computer bits) over a waveform. Modems originally encoded voice data with frequency-shift keying, a technique developed during World War II. Voices were encoded into digital tones in a system called SIGSALY, though they called the devices vocoders at the time.
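The idea behind frequency-shift keying is simple enough to sketch: each bit selects one of two tones. Here's a minimal, hypothetical Python sketch using the 1070/1270 Hz space/mark pair the later Bell 103 used (SIGSALY's actual scheme was far more elaborate than this):

```python
import math

# Frequency-shift keying: each bit picks one of two tones.
# 1070 Hz = space (0), 1270 Hz = mark (1) - the Bell 103 originate frequencies.
SPACE_HZ, MARK_HZ = 1070, 1270

def fsk_modulate(bits, baud=300, sample_rate=8000):
    """Return audio samples for a bit string, one tone burst per bit."""
    samples_per_bit = sample_rate // baud
    out = []
    for bit in bits:
        freq = MARK_HZ if bit == "1" else SPACE_HZ
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * freq * n / sample_rate))
    return out

samples = fsk_modulate("1011")
```

Feed those samples to a sound card and you'd hear the warble familiar from dial-up handshakes; demodulation just runs the mapping in reverse, detecting which tone is present in each slot.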

They matured over the next 17 years. And then came the SAGE air defense system in 1958. Here, the modem was employed to connect bases, missile silos, and radars back to the central SAGE system. These were Bell 101 modems, from Bell Labs, as in AT&T, and they ran at an amazing 110 baud.

A baud is a unit of transmission equal to how many times a signal changes state per second. On those early modems, each baud carried one bit, so that first modem was able to process data at 110 bits per second. This isn't to say that baud is the same as bit rate: early on they matched, but later encoding schemes packed multiple bits into each signal change, pushing bit rates well above the baud rate.
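The relationship is a one-liner: bit rate equals baud rate times bits carried per symbol. A small illustration (the modem figures are the commonly cited ones; treat the function as a sketch):

```python
def bit_rate(baud, bits_per_symbol):
    """Bits per second = signal changes per second * bits per signal change."""
    return baud * bits_per_symbol

# Bell 101: one bit per symbol, so baud and bit rate match.
print(bit_rate(110, 1))    # 110
# A V.32-class modem: still 2400 baud, but 4 data bits per symbol.
print(bit_rate(2400, 4))   # 9600
```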

So AT&T had developed the modem, and after a few years they began to see commercial uses for it. In 1962, they revved that 101 to become the Bell 103. Actually, the 103A. This thing used newer technology and better encoding, so it could run at 300 bits per second. Suddenly teletypes - or terminals - could connect to computers remotely. But Ma Bell kept a tight leash on how they were used for those first few years. That is, until 1968.

In 1968 came what is known as the Carterfone Decision. We owe a lot to the Carterfone. It bridged radio systems to telephone systems. And Ma Bell had been controlling what lived on their lines for a long time. The decision opened up what devices could be plugged into the phone system. And suddenly new innovations like fax machines and answering machines showed up in the world.

And so in 1968, any device with an acoustic coupler could be hooked up to the phone system. And that Bell 103A would lead to others. By 1972, devices had come from a Stanford Research spin-out, from Novation, and from others. But the Vadic added full duplex and got speeds four times what the 103A worked at by employing duplexing and new frequencies. We were up to 1200 bits per second.

The bit rate had jumped four-fold because, well, competition. Prices dropped and by the late 1970s microcomputers were showing up in homes. There was a modem for the S-100 Altair bus, the Apple II through a Z-80 SoftCard, and even for the Commodore PET. And people wanted to talk to one another. TCP had been developed in 1974 but at this point the most common way to communicate was to dial directly into bulletin board services. 

1981 was a pivotal year. A few things happened that were not yet connected at the time. The National Science Foundation created the Computer Science Network, or CSNET, which would result in NSFNET later, and when combined with the other nets, the Internet, replacing ARPANET. 

1981 also saw the release of the Commodore VIC-20 and TRS-80. This led to more and more computers in homes and more people wanting to connect with those online services. Later models would have modems.

1981 also saw the release of the Hayes Smartmodem. This was a physical box that connected to the computer over a serial port. The Smartmodem had a controller that recognized commands. And it established the Hayes command set standard for talking to phone lines, letting you initiate a call, dial a number, answer a call, and hang up - all without lifting a handset and placing it on a coupler. On the inside it was still 300 baud, but the progress and innovations were speeding up. And it didn't seem like a huge deal at the time.
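A few of those Hayes commands are worth seeing. The command strings below are the real ones (ATDT, ATA, ATH); the helper function is just a hypothetical illustration of how software composed a dial string before writing it to the serial port:

```python
# Real Hayes command strings; dial_command is an illustrative helper.
ANSWER = "ATA"   # answer an incoming call
HANGUP = "ATH0"  # hang up (go on-hook)

def dial_command(number, tone=True):
    """Build a Hayes dial string: ATDT for tone dialing, ATDP for pulse."""
    return ("ATDT" if tone else "ATDP") + number

print(dial_command("5551234"))              # ATDT5551234
print(dial_command("5551234", tone=False))  # ATDP5551234
```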

The online services were starting to grow. The French Minitel service was released commercially in 1982. The first BBS that would become Fidonet showed up in 1983. Various encoding techniques started to come along, and by 1984 you had the Trailblazer modem, at over 18,000 bits a second. But this was for specific uses, and it got there by combining many 36-bit-per-second channels.

The use of email started to increase, and with it the need for even more speed. We got the ability to connect two USRobotics modems in the mid-80s to run at 2400 bits per second. Then Gottfried Ungerboeck published a paper defining a theory of information coding that added parity checking, at about the time we got echo suppression. This allowed us to jump to 9600 bits per second in the late 80s.
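Ungerboeck's trellis-coded modulation is too involved to sketch here, but the underlying idea - spending a redundant check bit to catch transmission errors - can be shown with plain even parity. This is a simplified stand-in, not the actual coding those modems used:

```python
def even_parity_bit(bits):
    """Return the bit that makes the total count of 1s even."""
    return str(bits.count("1") % 2)

def frame(bits):
    """Append an even-parity check bit to a run of data bits."""
    return bits + even_parity_bit(bits)

def check(framed):
    """True if the received frame still has even parity (no single-bit error)."""
    return framed.count("1") % 2 == 0

print(frame("1011000"))   # 10110001 - three 1s, so the parity bit is 1
print(check("10110001"))  # True
print(check("10110011"))  # False: a flipped bit is detected
```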

All of these vendors releasing all of this resulted in the V.21 standard in 1989 from the ITU Telecommunication Standardization Sector (ITU-T). They're the ones that ratify a lot of standards, like X.509 or MP4. Several other V-series standards would come along as well.

The next jump came with the SupraFAXModem, with Rockwell chips, released in 1992. And USRobotics brought us to 16,800 bits per second, but with errors. Then we got V.32bis in 1991 to get to 14.4 - now we were talking in kilobits! Then 19.2 in 1993, 28.8 in 1994, and 33.6 in 1996. By 1999 we got the last of the major updates, V.90, which got us to 56k. At this point, most homes in the US at least had computers and were going online.

The same year, ANSI ratified ADSL, or Asymmetric Digital Subscriber Line. Suddenly we were communicating in the megabits. And the dial-up modem began to be used less and less. In 2004 the Multimedia over Coax Alliance was formed, and cable modems became standard. The combination of DSL and cable modems has now all but removed the need for dial-up modems. Given the pervasiveness of cell phones today, as few as 20% of homes in the US have a phone line anymore. We've moved on.

But the journey of the dial-up modem was a key contributor to us getting from a lot of disconnected computers to… The Internet as we know it today. So thank you to everyone involved, from Ma Bell, to Rockwell, to USRobotics, to Hayes, and so on. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you. Have a great day. 

Cray Supercomputers


Today we’re going to talk through the history of Cray Computers.

And really, this is a history of supercomputers during Seymour Cray's life. If it's not obvious from his name, he was the founder of Cray. But before we go there, let's back up a bit and talk about some things that were classified for a long time. The post-World War II spending by the US government definitely leveled up the US computer industry. And defense was the name of the game in those early years.


Once upon a time, the computer science community referred to the Minneapolis/St. Paul area as the Land of 10,000 Top Secret Projects. And a lot of things ended up coming out of that. One of the most important in the history of computing, though, was Engineering Research Associates, or ERA. They built highly specialized computers, made for breaking Soviet codes.


Honeywell had been founded in Minneapolis and, like Vannevar Bush, had gone from thermostats to computers. Honeywell started pumping out the DATAmatic 1000 in 1957. They had a computer shipping and were well situated to capitalize on the growing mainframe computer market.


ERA had some problems because the owners were embroiled in Washington politics, and so they were acquired by Sperry Rand, today's Unisys, at the time one of the larger mainframe developers and the progeny of both the Harvard Mark series and the ENIAC series of mainframes. The only problem was that the Sperry Rand crew were making a bundle off Univacs and so didn't put money into forward-looking projects.


The engineers knew that there were big changes coming in computing. And they wanted to be at the forefront. Who wouldn’t. But with Sperry Rand barely keeping up with orders they couldn’t focus on R&D the way many former ERA engineers wanted to. So many of the best and brightest minds from ERA founded Control Data Corporation, or CDC. And CDC built some serious computers that competed with everyone at the time. Because they had some seriously talented engineers. One, who had come over from ERA, was Seymour Cray. And he was a true visionary. 


And so you had IBM and their seven biggest competitors, known as Snow White and the Seven Dwarfs. Three of those dwarfs were doing a lot of R&D in Minneapolis (or at least the Minneapolis area). None are still based in the Twin Cities. But all three built ruggedized computers that could withstand nuclear blasts, corrosive elements, and anything you could throw at them.


But old Seymour. He wanted to do something great. Cray had a vision of building the fastest computer in the world. And as luck would have it, transistors were getting cheaper by the day. They had initially been designed to use germanium, but Seymour Cray worked at CDC to repackage them in silicon and was able to pack enough in to make the CDC 6600 the fastest computer in the world in 1964. They had leapfrogged the industry and went to market, selling the machines like hotcakes.


Now, CDC would build one of the first real supercomputers in that 6600. And supercomputers are what Cray is known for today. But there's a little more drama to get from CDC to Cray, and then honestly from Cray to the other Crays that Seymour founded. CDC went into a bit of a buying tornado as well. As with the Univacs, they couldn't keep up with demand and so became focused too much on the Development side - fulfillment and shipping - at the expense of the Research part of R&D. Additionally, shipping all those computers while competing with IBM was rough, and CDC was having financial problems, so CEO William Norris wouldn't let them redesign the 6600 from the ground up.


But Cray saw massively parallel processing as the future, which is kinda' what supercomputing really is at the end of the day, and was bitten by that bug. He wanted to keep building the fastest computers in the world. And he would get his wish. He finally left CDC in 1972 and founded Cray Research along with cofounding engineer Lester Davis. They went to Chippewa Falls, Wisconsin.


It took him four years, but Cray shipped the Cray-1 in 1976, which became the best-selling supercomputer in history (which means they sold more than 80 and less than a hundred). It ran at 80 MHz and could hit around 160 megaFLOPS. And that was with vector processing: it would do math faster by rearranging memory and registers to more intelligently process big amounts of data. He used Maxwell's equations on his boards. He designed it all on paper. The first Cray-1 would ship to Los Alamos National Laboratory. The Cray-1 weighed five and a half tons, cost around $8 million in 1976 money, and the fact that it was the fastest computer in the world, combined with the fact that it was space-age looking, gave Seymour Cray instant star status.


The Cray-1 would soon get competition from the ILLIAC IV out of the University of Illinois, an ARPA project. So Cray got to work thinkin’. He liked to dig when he thought, and he tried to dig a tunnel under his house. This kinda’ sums up what I think of Wisconsin. 


The Cray-2 would come in 1985, the first multiple-CPU design by Cray. It came in at 1.9 gigaFLOPS. They rearranged memory to allow for more parallelization and used two sets of memory registers. It effectively set the stage for modern processing architectures in a lot of ways, offloading tasks to a dedicated foreground processor with main memory connected over the fastest channels possible to each CPU. But IBM wouldn't release the first real multi-core processor until 2001. And we see this with supercomputers: the techniques used in them come downmarket over time.


But some of the biggest problems were in keeping the wires close together. The soldering of connectors at that density was nearly impossible. And the thing ran hot. So they added, get this, liquid coolant, leading some people to call the Cray-2 "Bubbles."


By now, Seymour Cray had let other people run the company, and there were competing projects like the Cray X-MP underway. Almost immediately after the release of the Cray-2, Seymour moved to working on the Cray-3, but the project was abandoned and, again, Cray found himself wanting to just go do research without shifting priorities dictating what he could do.


But Seymour always knew best. Again, he's from Wisconsin. So he left the company with his name and started another company, this one called Cray Computer, where he did manage to finish the Cray-3. But that Cold War spending dried up. And while he thought up designs for a Cray-4, the company would go bankrupt in 1995. He was athletic and healthy, so in his 70s, why not keep at it?


His next company would focus on massively parallel processing, which would be the trend of the future, but Seymour Cray died from complications of a car accident in 1996.


He was one of the great pioneers of the computing industry. He set a standard that computers like IBM's Blue Gene and then Summit, China's Sunway TaihuLight, Dell's Frontera, HPE's Cray machines, Fujitsu's ABCI, and Lenovo's SuperMUC-NG carry on. Those run at between about 20 and close to 150 petaflops. Today, the Cray X1E pays homage to its ancestor, the great Cray-1.


But no one does it with style the way the Cray-1 did. And think about this: Moore's Law says transistor counts will double every two years. Not to oversimplify things, but that means that since the Cray-2 we should have had a roughly 262-teraflop machine by now. But I guess he's not here to break down the newer barriers like he did with the von Neumann bottleneck.


Also, think about this: those early supercomputers were funded by the departments that became the NSA. They even helped fund the development of Crays throughout history. So maybe we have hit 262 and it's just classified. I swoon at that thought. But maybe it's just that this is where the move from bits to qubits and quantum computing becomes the next significant jump. Who knows?


But hey, thanks for joining me on this episode of the History of Computing Podcast. Do you have a story you want to tell? I plan to run more interviews soon and while we have a cast of innovators that we’re talking to, we’d love even more weird and amazing humans. Hit us up if you want to! And in the meantime, thanks again for listening, we are so lucky to have you. 




Radio Shack: Over 100 Years Of Trends In Technology


Today we’re going to talk about a company that doesn’t get a ton of credit for bringing computing to homes across the world but should: Radio Shack.

Radio Shack was founded by Theodore and Milton Deutschmann in 1921 in downtown Boston. The brothers were all about ham radio. A radio shack was a small structure on a ship that housed the radio equipment at the time. The name was derived from that slightly more generic term, given that one group of customers was radio officers outfitting ships.


By 1939 they would print a catalog and ship equipment by mail as well.


They again expanded operations in 1954 and would make their own equipment and sell it as well. But after too much expansion they ran into financial troubles and had to sell the company. When Charles Tandy bought the company for $300,000 in 1962, they had nine large retail stores. 


Tandy had done well selling leather goods and knew how to appeal to hobbyists. He slashed management and cut the amount of stock from 40,000 items to 2,500. The 80/20 rule is a great way to control costs. Given the smaller amount of stock, they were able to move to smaller stores. 


They also started to buy generic equipment and sell it under the Realistic brand, and started selling various types of consumer electronics. They used the locations where people had been buying electronics by mail to plan new, small store openings. They gave ownership to store managers. And it worked. The growth was meteoric for the next 16 years. They had some great growth hacks. They did free tube testing. They gave a battery away for free to everyone who came in. They hired electronics enthusiasts. And people loved them.


They bought Allied Radio in 1970 and continued to grow their manufacturing abilities.


Tandy would pass away in 1978, leaving behind a legacy of a healthy company, primed for even more growth. Electronics continued to be more pervasive in the lives of Americans, and the company continued its rapid growth, looking for opportunities to bring crazy new electronics products into people's homes. One was the TRS-80. Radio Shack had introduced the computer in 1977 with a BASIC licensed from Microsoft.


It sold really well and they would sell more than 100k of them before 1980. Although after that the sales would slowly go down with competition from Apple and IBM, until they finally sold the business off in the early 90s. But they stayed in computing. They bought Grid Systems Corporation to bring laptops to the masses in 1988. They would buy Computer City in 1991 and the 200 locations would become the Radio Shack Computer Centers. 


They would then focus on IBM compatible computers under the Tandy brand name rather than the TRS line. Computers were on the rise and clearly part of the Radio Shack strategy. I know I’ll never forget the Tandy Computer Whiz Kids that I’d come across throughout my adolescence. 


In the early 90s, Radio Shack was actually the largest personal computer manufacturer in the world, building computers for a variety of vendors, including Digital Equipment Corporation and, of course, themselves. Their expertise in acting as an OEM electronics factory turned out to be profitable in a number of ways. They also made cables, video tapes, even antennas, primarily under the Tandy brand. This is also when they started selling IBM computers in Radio Shack stores. They also tried to launch their own big-box retail stores.


In 1998, during those chains' explosive growth, they sold the Radio Shack Computer Centers to a number of vendors, including CompUSA and Fry's. They would move from selling IBM to selling Compaq in Radio Shacks at that point.


Radio Shack hit its peak in 1999. It was operating in a number of countries and had basically licensed the name globally. This was a big year of change, though. This was around the time they sold the Tandy leather side of the business to The Leather Factory, which continues on. They also got rid of the Realistic brand and inked a deal to sell RCA equipment instead. They were restructuring. And it would continue on for a long time and rarely for the better. 


Radio Shack began a slow decline in the new millennium. The move into adjacencies alienated the hobbyists, who had always been the core Radio Shack shopper. And Radio Shack tried to move into other markets, cluing other companies into what their market was worth.


They had forgotten the lessons learned when Tandy took over the company: more and more parts in the warehouses, more and more complex sales, more and bigger stores. Again, the hobbyists were abandoning Radio Shack. By 2004 sales were down. The company adopted a high-pressure sales plan and hammered on the managers at the stores, constantly pushing them, until thousands of managers rebelled and filed a class action suit.


And it wasn't just internal employees. They were voted the worst overall customer experience among retailers for six years in a row. Happy cows make happy milk. And it wasn't just about store managers. They went through six CEOs from 2006 to 2016. And 2006 was a tough year to kick such things off: they had to close 500 stores that year.


And the computer business was drying up. Dell, Amazon, Best Buy, Circuit City, and others were eating their lunch. 


By 2009, they would rebrand as just The Shack and started to focus on mobile devices. Hobbyists were confused and there was less equipment on the shelves, driving even more of them online and to other locations. Seeing profit somewhere, they started to sell subscriptions to other services, like Dish Network. 


They would kick off Amazon Locker services in 2012, but that would last only a year. They were looking for relevance.


Radio Shack filed Chapter 11 in 2015 after nearly three years of straight losses. And big ones. That's when they were acquired by General Wireless Inc for just over $26 million. The plan was to make money by selling mobile phones and mobile phone plans at Radio Shacks. They would go into a big deal with Sprint, who would take over leases to half the stores, which would become Sprint stores, and sell mobile devices through Sprint - along with cell plans, of course!


And there were lawsuits. From creditors, lessors, and even people with gift cards.


The only problem was that General Wireless couldn't capitalize on the Sprint partnership in quite the way they planned, and they went bankrupt in 2017 as well!


I don't envy Radio Shack CEO Steve Moroneso. Radio Shack was once the largest electronics chain in the world. But a variety of factors came into play. Big-box retailers started to carry electronics. The Flavoradio was almost a perfect example of the rise and fall: they made it from the 70s up until 2001, when the decline began, and it was unchanged throughout all of that growth. But after they got out of the radio business, things just… weren't right.


With around 500 stores left, it's still a storied company. A 100-plus-year-old company, one that grew through multiple waves of technology: from ham radios to CB radios to personal computers in the 70s and 80s to cell phones. But they never really found the next thing once the cell phone market for Radio Shack started to dry up. They went from the store of the tinkerer, with employees who cared, to a brand kinda' without an identity. If that identity is to succeed, they need the next wave. Unless it's too late.


But we owe them our gratitude for helping the world by distributing many waves of technology. Just as I owe you dear listeners, for tuning in to yet another episode of the History of Computing Podcast. 

As We May Think and the Legacy of Vannevar Bush


Today we're going to celebrate an article called As We May Think and its author, Vannevar Bush.

Imagine it’s 1945. You see the future and prognosticate instant access to all of the information in the world from a device that sits on every person’s desk at their office. Microfiche wouldn’t come along for another 14 years. But you see the future. And the modern interpretations of this future would be the Internet and personal computing. But it’s 1945. There is no transistor and no miniaturization that led to microchips. But you’ve seen ENIAC and you see a path ahead and know where the world is going. And you share it. 


That is exactly what happened in "As We May Think," an article published by Vannevar Bush in The Atlantic.


Vannevar Bush was one of the great minds in early computing. He got his doctorate from MIT and Harvard in 1916 and went into the private sector. During World War I he built a submarine detector, then went back to MIT, splitting his time between academic pursuits, inventing, and taking inventions to market. He worked with the American Radio and Research Corporation (AMRAD), made millions off an early thermostat company, and founded the American Appliance Company, now known as the defense contracting powerhouse Raytheon.


By 1927 computing began to tickle his fancy, and he built a differential analyzer, a mechanical computer to do all the maths! He taught at MIT, penning texts on circuit design. His work would influence the great Claude Shannon, and his designs would be used in early codebreaking computers. He would become a Vice President of MIT as well as the Dean of the MIT School of Engineering.


Then came World War II. He went to work at the Carnegie Institution for Science, where he was exposed to even more basic research than during his time with MIT. Then he sat on and chaired the National Advisory Committee for Aeronautics, which would later become NASA - helping get the Ames Research Center and Glenn Research Center started.


Seems like a full career? Nah, just getting started! 


He went to President Roosevelt and got the National Defense Research Committee approved. There, they developed antiaircraft guns and radar, and funded the development of ENIAC. Roosevelt then made him head of the Office of Scientific Research and Development, which worked on developing the proximity fuse. There he also recruited Robert Oppenheimer to run the Manhattan Project, and he was there in 1945 for the Trinity test, to see the first nuclear bomb detonated.


And that is when he lost a major argument. Rather than treat nuclear weapons like the international community had treated biological weapons, the world would enter into a nuclear arms race. We still struggle with that fallout today. 


He would publish As We May Think in the Atlantic that year and inspire the post World War II era of computing in a few ways. The first is funding. He was the one behind the National Science Foundation. And he advised a lot of companies and US government agencies on R&D through his remaining years sitting on boards, acting as a trustee, and even a regent of the Smithsonian. 


Another was inspiration. As We May Think laid out a vision. Based on all of the basic and applied research he had been exposed to, he was able to see the convergence that would come decades later. ENIAC would usher in the era of mainframes. But things would get smaller. Cameras and microfilm and the parsing of data would put more information at our fingertips than ever. An explosion of new information out of all of this research would follow and we would need to parse it using those computers, which he called a memex. The collective memory of the world.


But he warned of an arms race leading to us destroying the world first.


Ironically it was the arms race that in many ways caused Bush’s predictions to come true. The advances made in computing during the Cold War were substantial. The arms race wasn’t just about building bigger and more deadly nuclear weapons but brought us into the era of transistorized computing and then minicomputers and of course ARPANET. 


And then, around the time basic research was getting defunded by the government due to Vietnam, costs had come down enough to allow Commodore, Apple, and Radio Shack to flood the market with inexpensive computers and for the nets to be merged into the Internet. And the course we are on today was set.


I can almost imagine Bush sitting in a leather chair in 1945 trying to figure out if the powers of creation or the powers of destruction would win the race to better technology. And I’m still a little curious to see how it all turns out. 


The part of his story that is so compelling is information. He predicted that machines would help unlock even faster research, let us make better decisions, and ultimately elevate the human consciousness. Doug Engelbart saw it. The engineers at Xerox saw it. Steve Jobs made it accessible to all of us. And we should all look to further that cause.


Thank you for tuning in to yet another episode of the History of Computing Podcast. We are so very lucky to have you.



Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to look at an often forgotten period in the history of computers. The world before DOS. 

I've been putting off telling the story of CP/M. But it's time. Picture this: It's 1974. It's the end of the Watergate scandal. The oil crisis. The energy crisis. Stephen King's first book Carrie is released. The Dolphins demolish my Minnesota Vikings 24-7 in the Super Bowl. Patty Hearst is kidnapped. The Oakland A's win the World Series. Muhammad Ali pops George Foreman in the grill to win the Heavyweight title. Charles de Gaulle Airport opens in Paris. The Terracotta Army is discovered in China. And in one of the most telling shifts that we were moving from the 60s into the mid-70s, the Volkswagen Golf replaces the Beetle. I mean, the Hippies shifted to Paul Anka, Paper Lace, and John Denver. The world was settling down.

And the world was getting ready for something to happen. A lot of people might not have known it yet, but the Intel 8080 series of chips was about to change the world. Gary Kildall could see it. He’d bought the first commercial microprocessor, the Intel 4004, when it came out in 1971. He’d been enamored and consulted with Intel. He finished his doctorate in computer science and went to the Naval Postgraduate School in Monterey to teach, and developed Kildall’s Method to optimize compilers. But then he met the 8080 chip. 

The Intel Intellec-8 was an early computer that he wanted to get an operating system running on. He’d written PL/M or the Programming Language for Microcomputers and he would write the CP/M operating system, short for Control Program/Monitor, loosely based on TOPS-10, the OS that ran on his DECsystem-10 mainframe. 

He would license PL/M through Intel, but operating systems weren’t really a thing just yet. By 1977, personal computers were on the rise and he would take CP/M to market through a company he called Digital Research, Inc. His wife Dorothy ran the company. 

And they would see a nice rise in sales: 250,000 licenses in 3 years. This was the first time consumers could interact with computer hardware in a standardized fashion across multiple systems. They would port the code to the Z80 processors, and people would run CP/M on Apple IIs, Altairs, IMSAI, Kaypro, Epson, Osborne, and Commodore machines, and even the trash 80, or TRS-80. The world was hectic and not that standard, but there were really 3 main chips, so the software actually ran on 3,000 models during an explosion in personal computer hobbyists. 

CP/M quickly rose and became the top operating system on the market. We would get WordStar, dBase, VisiCalc, MultiPlan, SuperCalc, Delphi, and Turbo Pascal for the office. And for fun, we’d get Colossal Cave Adventure, Gorillas, and Zork. 

It bootstrapped from floppy disks. They made $5 million in 1981. Almost like cocaine money at the time. Gary got a private airplane. And John Opel from IBM called. Bill Gates told him to. IBM wanted to buy the rights to CP/M. Digital Research and IBM couldn’t come to terms. And this is where it gets tricky. IBM was going to make CP/M the standard operating system for the IBM PC. Microsoft jumped on the opportunity and found a tool called 86-DOS from a company called Seattle Computer Products. The cool thing there is that it used the CP/M API, so it would be easy to have compatible software. Paul Allen worked with them to license the software, then compiled it for the IBM PC. This was the first MS-DOS, and it became the standard, branded as PC DOS for IBM. 

Later, Kildall agreed to sell CP/M for $240 on the IBM PCs. The problem was that PC DOS came in at $40. If you knew nothing about operating systems, which would you buy? And so even though it had compatibility with the CP/M API, PC DOS really became the standard. So much so that Digital Research would clone the Microsoft DOS and release their own DR DOS. Kildall would later describe Bill Gates using the following quote: "He is divisive. He is manipulative. He is a user. He has taken much from me and the industry.” While Kildall considered DOS theft, he was told not to sue because the laws simply weren’t yet clear. 

At first though, it didn’t seem to hurt. Digital Research continued to grow. By 1983 computers were booming. Digital Research would hit $45 million in sales. They had gone from just Gary to 530 employees by then. Gangbusters. Although they did notice that they missed the mark on the 8088 chips from Intel and even with massive rises in sales had lost market share to Unix System V and all the variants that would come from that. CP/M would add DOS emulation. 

But sales began to slip. The IBM 5150 and subsequent machines just took over the market. And CP/M, once a dominant player, would be left behind. Gary would move more into research and development but by 1985 resigned as the CEO of Digital Research, in a year where they laid off 200 employees. 

He helped start a show called the Computer Chronicles in 1983. It’s something I’ve been watching a lot recently while researching these episodes, and it’s awesome! He was a kind and wickedly smart man. Even to people who had screwed him over. 

As many would after them, Digital Research went into long-term legal drama, involving the US Department of Justice. But none of that saved them. And it wouldn’t save any of the other companies that went there either. Digital Research would sell to Novell for $80 million in 1991 and various parts of the intellectual property would live on with compilers, interpreters, and DR DOS living on. For example, as Caldera OpenDOS. But CP/M itself would be done.  

Kildall would die in a bar in Monterey, California in 1994. One of the pioneers of the personal computer market. From CP/M to disk buffering to the data structures behind early CD-ROMs, he was all over the place in personal computers. And CP/M was the gold standard of operating systems for a few years. 

One of the reasons I put this episode off is because I didn’t know how I would end it. Like, what’s the story here. I think it’s mostly that I’ve heard it said that he could have been Bill Gates. I think that’s a drastic oversimplification. CP/M could have been the operating system on the PC. But a lot of other things could have happened as well. He was wealthy, just not Bill Gates level wealthy. And rather than go into a downward spiral over what we don’t have, maybe we should all be happy with what we have. 

And much of his technology survived for decades to come. So he left behind a family and a legacy. In uncertain times, focus on the good and do well with it. And thank you for being you. And for tuning in to this episode of the History of Computing Podcast. 

The Days Of Our Twitters


Today we’re going to celebrate the explosion and soap-opera-esque management of Twitter. As with many things, it started with an idea. Some people get one idea. Some of these Twitter founders got multiple ideas, which is one of the more impressive parts of this story. And the story of Twitter goes back to 1999. Evan Williams created a tool that gave “push-button publishing for the people.”

That tool was called Blogger, and it ignited a fire in people publishing articles about whatever they were thinking or feeling or working on or doing. Today, we just call it blogging. The service jumped in use and Evan sold the company to Google, where he worked for a bit and then left in 2004 in search of a new opportunity. Seeing the rise of podcasting, Williams founded another company called Odeo, to build a tool for podcasters. They worked away at that, being joined by Noah Glass, Biz Stone, Jack Dorsey, Crystal Taylor, Florian Weber, Blaine Cook, Ray McClure, Rim Roberts, Rabble, Dom, @Jeremy and others. And some investors of course. Then Apple added podcasts to iTunes, and they knew they had to pivot. They’d had these full-day sessions brainstorming new ideas. Evan was thinking more and more about this whole incubator kind of thing. Noah was going through a divorce, and one night he and Jack Dorsey were going through some ideas for new pivots or companies. Jack had just been turned on to text messaging and mentioned this one idea about sharing texts to groups. The company was young and full of raver kids at the time, and the thought was you could share where you are and what you were doing. Noah thought you could share your feelings as well. Since it went through text messaging, which capped a message at 160 characters, you had a maximum of 140 characters once room was left for a username. It started as a side project. Jack and Florian Weber built a prototype. It slowly grew into a real product. They sold the remaining assets of Odeo, and Twitter was finally spun off into its own company in 2007. Noah was the first CEO. But he was ousted in 2007 when Jack Dorsey took over. They grew slowly during the year but jumped into the limelight at South By Southwest, taking home the Web Award. I joined Twitter in October of 2007. To be honest, I didn’t really get it yet. But they started to grow. And rapidly. They were becoming a news source. People were tweeting to their friends. They added the @ symbol to mention people in posts. 
They added the ability to retweet, or repost a tweet from someone else. And of course hashtags. Servers crashed all the time. The developers worked on anything they wanted. And after a time, the board of Twitter, which primarily consisted of investors, got tired of the company not being run well and ousted Jack in 2008, letting Evan run the company. And I do like to think of the history of Twitter in stages. Noah was the incubator. He and Jack worked hard and provided a vision. Noah came up with the name, Jack helped code the site and keep it on track. Once Noah was gone they were a cool hacker collective that went into hyper growth. There wasn’t a ton of structure and the company reflected the way people used the service, a bit chaotic. But with Evan in, the hyper growth accelerated. Twitter added lists in 2009, allowing you to see updates from people you weren’t following. They were still growing fast. By 2010 there were 50 million tweets a day. Months later there were 65 million. And Jack Dorsey, while no longer at Twitter, was the media-darling face of Twitter. He would found Square that year. And Square would make a dent in the multi-verse by allowing pretty much anyone to take a credit card using their phone, pretty much any time. That would indirectly lead to coffee shops, yoga studios, and any number of kinds of businesses popping up all over the world. They bought an app called Tweetie, which became the Twitter app many of us use today. But servers could still crash. There was still no revenue. So Evan brought in Dick Costolo, founder of FeedBurner, to become the Chief Operating Officer. Dick would be named CEO. Dorsey, fuming ever since his ousting, had been behind the switch. This is where Twitter kinda’ grew up. Under Dick the site finally got stable. The users continued to grow. They started to make money. Lots of money. By 2011 they added URL shortening using the t.co domain, because many of us would use a URL shortening service to conserve characters. 
Twitter would continue to grow and go public in 2013. By then, they’d had offers to buy equity from musicians, actors, sports stars, and even former Vice Presidents. And Twitter would continue to grow. Jack Dorsey would lead Square to an IPO in 2015. Obama would send his first tweet that same year. Shortly afterwards, Dick stepped down as the CEO of Twitter and Jack came back. Grand plans work out, I suppose. Usually people don’t get back together after the breakup. But Jack did. In 2016, Donald Trump was elected president of the United States. While Obama had used Twitter, Trump took it to a whole new level, announcing public policy there sometimes before other politicians knew. And this is where Twitter just gets silly. Hundreds of millions of people log on and argue. Not my thing. I mostly just post links to these episodes these days. Jack Dorsey is now the CEO of both Square and Twitter. He catches flack for it every now and then - but it’s mostly working. He co-founded two great companies and he likely doesn’t want to risk losing control of either. Evan Williams founded Medium in 2012, another blogging service. Blogging, micro-blogging, then back to blogging. He has had three great companies he co-founded. And continues helping startups. Biz Stone, often the heart of Twitter, would found Jelly, which was sold to Pinterest. The fourth co-founder, Noah Glass, took some time away from startups. His part in the founding of Twitter was often underestimated. But today, he’s a CEO again and serves on the board of a number of non-profits. The post-PC era, the social media era, the instant everything era. Twitter symbolizes all of it, kicked off when Jack sent the first message on March 21, 2006, at 9:50 p.m. It read, "just setting up my twttr." From a rag-tag group of kids who went to clubs to a multi-billion dollar social media behemoth, they also show the growth stages of network effect companies. The incubation period led by a passionate Noah. 
The release-and-rise period, full of doing whatever it takes and people working 20-hour days, symbolized by Jack, part one. The meteoric rise and the beginnings of getting their ducks in order under Evan. The growing-up phase where they got profitable and stable with Dick. And then the Steve Jobs-esque reinvention of Jack on his return, slowing growth and reducing risk. The founders all felt like Twitter was theirs. And it was. A lot of founders think they’re going to change the world. And some actually do. And for the effort they put into putting a dent in the universe, we thank them. And you, dear listeners, we thank you too, for giving us the opportunity to share these stories of betrayal and shame and rebirth. We are so lucky to have you. Have a great day!

Commodore Computers


Today we’re going to talk through the history of Commodore. That history starts with Idek Trzmiel, who would become Jack Tramiel when he immigrated to the United States. Tramiel was an Auschwitz survivor and, like many immigrants throughout history, a hard worker. He bought a small office machine repair company in the Bronx with money he saved up driving taxis in New York, and got a loan through the US Army to help buy the company.

He wanted a name that reflected the military that had rescued him from the camp, so he picked Commodore and incorporated the company in Toronto. He would import Czech typewriters through Toronto and assemble them, moving to adding machines when lower-cost Japanese typewriters started to enter the market. By 1962, Commodore got big enough to go public on the New York Stock Exchange. Those adding machines would soon be called calculators when they went from electromechanical devices to digital, with Commodore making a bundle off the Minuteman calculators. Tramiel and Commodore investor Irving Gould flew to Japan to see how to better compete with manufacturers in the market.

They got the chips to build their calculators from MOS Technology, whose MOS 6502 chip took off, quickly becoming one of the most popular chips in early computing. When Texas Instruments, who designed the calculator chips, entered the calculator market itself, everyone knew calculators were a dead end. The Altair had been released in 1975. But it used the Intel chips. Tramiel would get a loan to buy MOS for $3 million, and it would become the Commodore Semiconductor Group. The PC revolution was on the way, and this is where Chuck Peddle, who came to Commodore in the acquisition, comes in. Seeing the 6502 chips that MOS started building in 1975 and the 6507 that had been used in the Atari 2600, Peddle pushed to start building computers.

Commodore had gotten to $60 million in revenues, but the Japanese exports of calculators and typewriters left them needing a new product. Peddle proposed they build a computer, and developed one called the Commodore PET. Starting at $800, the PET came with a MOS 6502 chip - the same chip that shipped in the Apple I. It came with an integrated keyboard and monitor. And Commodore BASIC in a ROM. And as with many in that era, a cassette deck to load data in and save it. Commodore was now a real personal computer company. And one of the first. Along with the TRS-80 (or Trash 80) and the Apple II, released the same year, it would be known as part of the Trinity of personal computers.

By 1980 they would be a top 3 company in a market that was growing rapidly. Unlike Apple, they didn’t focus on great products or software, and share was dropping. So in 1981 they released the VIC-20. This machine came with Commodore BASIC 2.0 and still used a 6502 chip. But by now prices had dropped to a level where the computer could sell for $299. The VIC-20 was a computer integrated into a keyboard, so you brought your own monitor, which could be composite, similar to what shipped with the Apple IIc. And it would be marketed in retail outlets, like K-Mart, where it was the first computer to be sold.

They would outsource the development of the VICModem and did deals with The Source, CompuServe, and others to give out free services to get people connected to the fledgling online world. The market was getting big. Over 800 software titles were available. Today you can use VICE, a VIC-20 emulator, to run many of them! But the list of machines they were competing with would grow, including the Apple II, the TRS-80, and the Atari 800. They would sell over a million in that first year, but the VIC-20’s real successor emerged in-house: the Commodore 64.

Initially referred to as the VIC-40, the Commodore 64 showed up in 1982, starting at around $600, with the improved MOS 6510 (later 8500) chip and the 64k of RAM that gave it its name. It is easily one of the most recognizable computer names in history. It could double as a video game console. Sales were initially slow as software developers caught up to the new chips - and they kinda’ had to work through some early problems with units failing. They still sold millions and millions by the mid-1980s. But they would need to go into a price war with Texas Instruments, Atari, and other big names of the time. Commodore would win that war but lose Tramiel along the way. He quit after disagreements with Gould, who brought in a former executive from a steel company with no experience in computers. Ironically, Tramiel bought Atari after he left.

A number of models would come out over the next few years: the Commodore MAX, the Educator 64, the SX-64, the C128, the Commodore 64 Games System, and the 65, which was killed off by Irving Gould in 1991. By 1993, Gould had mismanaged the company. Commodore had bought Amiga for $25 million in 1984, but even a 32-bit computer wouldn’t rescue the company. After the Mac arrived in 1984, and after the downward pressure that had been put on prices, Commodore never fully recovered. Yes, they released systems, like the Amiga 500, but they were never as dominant. They couldn’t shake the low-priced image, even with machines like the Amiga 1000, one of the best made for its time, or the Amiga 2000, meant to compete with the Mac, or their entries in the PC clone market meant to compete with the deluge of vendors there.

They even tried a Microsoft BASIC interpreter and their own Amiga Unix System V variant. But ultimately, by 1994 the company would go into bankruptcy, with surviving subsidiaries going through that demise that happens where you end up with your intellectual property somehow being held by Gateway Computers. More on them in a later episode.

I do think the story here is a great one. A person manages to survive Auschwitz, move to the United States, and build a publicly traded empire with easily one of the most recognizable names in computing. That survival and perseverance should be applauded. Tramiel would run Atari until he sold it in the mid-90s and would help found the United States Holocaust Memorial Museum. He was a hard negotiator and a competent businessperson. Today, in tech, we say that competing on price is a race to the bottom.

He had to live that. But he and his exceptional team at Commodore certainly deserve our thanks, for helping to truly democratize computing, putting low-cost single board machines on the shelves at Toys-R-Us and K-mart and giving me exposure to BASIC at a young age. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky you listen to these stories. Have a great day.

The Brief History Of The Battery


Most computers today have multiple batteries. Going way, way back, most had a CMOS or BIOS battery used to run the clock and keep BIOS configurations when the computer was powered down. These have mostly centered around the CR2032 lithium button cell battery, also common in things like garage door openers and many of my kids’ toys!


Given the transition to laptops for a lot of people, now that families, schools, and companies mostly deploy one computer per person, there’s a larger battery in a good percentage of machines made. Laptops mostly use lithium ion batteries, which pack more energy per pound than earlier chemistries and can be recharged hundreds of times.


The oldest known batteries are the “Baghdad batteries”, dating back to about 200 BC. They could have been used for a number of things, like electroplating. But it would take 2,000 years to get back to the idea. As is often the case, things we knew as humans became much, much more once backed up with science. First, scientists studied positive and negative elements and formed an understanding that electricity flowed between them. Like the English natural scientist William Gilbert, who first established some of the basics of electricity and magnetism. And Sir Thomas Browne, who continued to refine the theories and was the first to call it “electricity.” Then another British scientist, Peter Collinson, sent Benjamin Franklin an electricity tube, of the kind these early experiments had begun to produce. 


Benjamin Franklin spent some time writing back and forth with Collinson, and in 1752 he famously flew a kite in a storm, showing that electrical current flowed down the wet kite string and that a metal key could conduct that electricity. This supported the idea that electricity was a fluid. Franklin had already linked capacitors (Leyden jars) together in 1749, coining the term “battery” for the array. Also in 1752, Thomas-François Dalibard proved the same hypothesis using a large metal pole struck by lightning. 


Budding scientists continued to study electricity and refine the theories. In 1799, Alessandro Volta built a battery by stacking alternating discs of zinc and silver separated by cloth soaked in brine. This was known as a voltaic pile and would release a steady current. The batteries corroded fast, but today we still call the potential that pushes a current of one amp through a resistance of one ohm a volt. Suddenly we were creating electricity from an electrochemical reaction. 
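That volt-amp-ohm relationship is what we now call Ohm's law, V = I × R. As a quick illustrative sketch (not from the episode, just a worked example):

```python
# Ohm's law: voltage (volts) = current (amps) * resistance (ohms).
def volts(current_amps, resistance_ohms):
    return current_amps * resistance_ohms

# One amp through one ohm gives exactly one volt.
print(volts(1.0, 1.0))    # 1.0
# Half an amp through a 220-ohm resistor drops 110 volts.
print(volts(0.5, 220.0))  # 110.0
```
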


People continued to experiment with batteries and electricity in general. Giuseppe Zamboni, another Italian physicist, invented the Zamboni pile in 1812. Here, he switched to zinc foil and manganese oxide. Completely unconnected, Swedish chemist Johan August Arfwedson discovered lithium in 1817. Lithium. Atomic number 3. Lithium is an alkali metal found all over the world. It can be used to treat manic depression and bipolar disorder. And it powers today’s smart-everything, Internet-of-Things world. But no one knew that yet. 


The English chemist John Frederick Daniell invented the Daniell cell in 1836, building on the concept by placing a copper plate in a copper sulfate solution and hanging a zinc plate in the same jar or beaker. Each plate had a wire; the zinc plate became a negative terminal, while the copper plate became a positive terminal, and suddenly we were able to reliably produce electricity. 


Robert Anderson would build the first electric car using a battery at around the same time, but Gaston Planté would build the first rechargeable battery in 1859, one which very much resembles the ones in our cars today. He gave us the lead-acid battery, switching to lead oxide plates in sulfuric acid. 


In the 1860s the Daniell cell would be improved by Callaud, and a lot of different experiments continued on. The Gassner dry cell came from Germany in 1886, mixing ammonium chloride with plaster of Paris and adding zinc chloride. Shelf life shot up. The National Carbon Company would swap out the plaster of Paris for coiled cardboard. That Columbia dry cell would be sold commercially throughout the United States by the National Carbon Company, which would become Eveready, maker of the Energizer batteries that power the weird bunny with the drum. 


Swedish scientist Waldemar Jungner would give us nickel-cadmium, or NiCd, in 1899, but his batteries were a bit too leaky. So Thomas Edison would patent a new model in 1901, and iterations of these are pretty much common through to today. Lithium would start being used shortly after, by G.N. Lewis, but would not become standard until the 1970s, when button cells started to be put in cameras. Asahi Chemical out of Japan would then give us the lithium ion battery in 1985, brought to market by Sony in 1991, leading to John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino winning the Nobel Prize in Chemistry in 2019. 


Those lithium ion batteries are used in most computers and smartphones today. The Osborne 1 came in 1981. It was what we now look back on as a luggable computer: a 25-pound machine that could be taken on the road, but that you plugged directly into the wall. The Epson HX-20 would ship the same year with a battery, opening the door to batteries powering computers. 


Larger batteries, like those for electric cars and solar storage, require much larger amounts of lithium. This causes an exponential increase in demand and thus a jump in the price, making lithium more lucrative to mine. 


Mining lithium to create these batteries is, as with all other large scale operations taken on by humans, destroying entire ecosystems, such as those in Argentina, Bolivia, Chile, and the Tibetan plateau. Each ton of lithium takes half a million gallons of water, another resource that’s becoming more precious. And the waste is usually filtered back into the ecosystem. Most other areas mine lithium out of rock using traditional methods, but there’s certainly still an environmental impact. There are similar impacts to mining Cobalt and Nickel, the other two metals used in most batteries. 


So I think we’re glad we have batteries. Thank you to all these pioneers who brought us to the point that we have batteries in pretty much everything. And thank you, listeners, for sticking through to the end of this episode of the History of Computing Podcast. We’re lucky to have you. 

The Data General Nova


Today we’re going to talk through the history of the Data General Nova. Digital Equipment was founded in 1957 and released a game-changing computer, the PDP-8, in 1965. We covered Digital in a previous episode, but to understand the Data General Nova, you kinda’ need to understand the PDP. It was a fully transistorized computer, and it was revolutionary in the sense that it brought interactive computing to the masses. Based in part on research done for MIT in the TX-0 era, the PDP made computing more accessible to companies that couldn’t spend millions on computers, and it was easier to program - and the PDP-1 could be obtained for around a hundred thousand dollars. You could use a screen and type commands on a keyboard for the first time, and it would actually output to the screen rather than to teletypes or punch cards. That interactivity unlocked so much.

The PDP began the minicomputer revolution. The first real computer game, Spacewar!, was played on it, and adoption increased. The computers got faster. They could do as much as large mainframes. The thousands of transistors were faster and less error-prone than the old tubes. In fact, those transistors signaled that the third generation of computers was upon us. And people who liked the PDP were life-long converts. Fanatical, even. The PDP line evolved until 1965, when the PDP-8 was released. This is where Edson de Castro comes in, acting as the project manager for the PDP-8 development at Digital. 3 years later, he, Henry Burkhardt, and Richard Sogge of Digital would be joined by Herbert Richman, a salesperson from Fairchild Semiconductor.

They were proud of the PDP-8. It was a beautiful machine. But they wanted to go even further. And they didn’t feel like they could do so at Digital. They would build a less expensive minicomputer that opened up even more markets. They saw new circuit board manufacturing techniques, new automation techniques, new reasons to abandon the 12-bit CPU. Edson had wanted to build a PDP with all of this and the ability to use 8-bit, 16-bit, or 32-bit architectures, but it got shut down at Digital. So they got two rounds of venture capital at $400,000 each and struck out on their own. They wanted the computer to fit into a 19-inch rack mount, a choice that helped cement the 19-inch rack as the standard from then on.

They wanted the machines to be 16-bit, moving past the 8- and 12-bit computers common in minicomputing at the time. They used an accumulator-based architecture, which is to say the CPU had a register that stored the intermediate results of a computation. This way you weren’t writing the result of every operation into memory and then reading it right back into the CPU. Suddenly, you could do infinitely more math! Having someone from Fairchild really unlocked a lot of knowledge about what was happening in the integrated circuit market. They were able to get the price down into the thousands, not tens of thousands.
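As a rough illustration of that accumulator idea, here's a toy sketch in Python - not Nova assembly, and simplified to a single register where the real Nova had four 16-bit accumulators. The point is that the running result stays in a CPU register instead of round-tripping through memory after every operation:

```python
# Toy accumulator machine: one working register plus addressable memory.
class Accumulator:
    def __init__(self):
        self.acc = 0        # the single working register
        self.memory = {}    # addressable store

    def load(self, addr):   # memory -> accumulator
        self.acc = self.memory[addr]

    def add(self, addr):    # acc = acc + memory[addr]
        self.acc += self.memory[addr]

    def store(self, addr):  # accumulator -> memory
        self.memory[addr] = self.acc

cpu = Accumulator()
cpu.memory = {0: 2, 1: 3, 2: 4}
cpu.load(0)
cpu.add(1)
cpu.add(2)      # the running total never leaves the register
cpu.store(3)    # one write back to memory at the end
print(cpu.memory[3])  # 9
```

Only the final `store` touches memory; a memory-to-memory design would have written and re-read an intermediate result after each add.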

You could actually buy a computer for less than 4 thousand dollars.

The Nova shipped in 1969 and was an instant success with a lot of organizations, especially smaller science labs, like the one at the University of Texas that was their first real paying customer. Within 6 months they sold 100 units, and within the first few years they were over $100 million in sales. They were eating into Digital’s profits. No one would have invested in Data General had they tried to compete head-on with IBM. Digital had become the leader in the minicomputer market, effectively owning the category. But the Nova posed a threat. Then they decided to get into a horse race with Digital, releasing the SuperNOVA to compete with the PDP-11. They used space-age designs. They were great computers. But Digital was moving faster. And Data General started to have production and supply chain problems, which led to lawsuits and angry customers. Never good.

By 1977 Digital came out with the VAX line, setting the standard at 32-bit. Data General was late to that party and honestly, after being a market leader in low-cost computing, they started to slip. By the end of the 70s, microchips and personal computers would basically kill minicomputers, and while transitioning from minicomputers to servers, Data General never made quite the same inroads that Digital Equipment did. Data General would end up with their own DOS and, like everyone, their own UNIX System V variant, plus one of the first portable computers. But by the mid-80s IBM had shown up on the market, and Data General moved into databases and a number of other areas to justify a place in what was becoming a server market.

In fact, the eventual home for Data General would be to get acquired by EMC and become CLARiiON under the EMC imprint. It was an amazing rise. Hardware that often looked like it came straight out of Buck Rogers. Beautiful engineering. But you just can’t compete on price and stay in business forever. Especially when you’re competing with your former bosses, who have much, much deeper pockets.

EMC benefited from a lot of these types of acquisitions over the years, becoming a colossus by the end of the 2010s. We can thank Data General, and specifically the space-age Nova, for helping set many standards we use today. We can thank them for helping democratize computing in general. And if you’re a heavy user of EMC appliances, you can probably thank them for plenty of the underlying bits of what you do even through to today. But the minicomputer market required companies to make their own chips in that era, and that was destroyed by the dominance of Intel in the microchip industry. It’s too bad.

So many good ideas. But the costs to keep up turned out to be too much for them, as with many other vendors. One way to think about this story: you can pick up on new manufacturing and design techniques and compete with some pretty large players, especially on price. But when the realities of scaling an operation come, you can’t stumble, or customer confidence will erode and there’s a chance you won’t get to compete for deals again in the future. But try telling that to your growing sales team.

I hear people say you have to outgrow the growth rate of your category. You don’t. But you do have to do what you say you will and deliver. And when changes in the industry come, you can’t be all over the place. A cohesive strategy will help you weather the storm. So thank you for tuning into this episode of the History of Computing Podcast. We are so lucky you chose to join us and we hope to see you next time! Have a great day!

Airbnb: The Rise and Rise of the Hospitality Industry


Today we’re going to talk through the history of Airbnb. But more importantly, we’re going to look at what brought the hospitality industry to a place so ripe to be disrupted. The ancient Greeks, Romans, Persians, and many other cultures provided for putting travelers up while visiting other cities in one way or another. Then inns began to rise along the roads connecting medieval Europe, complete with stables and supplies to get to your next town. The rise of stagecoaches brought a steady flow of mail, and a rise in longer-distance business travel gave way to much larger and fancier hotels in the later 1700s and 1800s. In 1888 César Ritz became the first manager of the Savoy hotel in London, after time at the Hotel Splendide in Paris and other hotels. He would open the Paris Ritz in 1898 and expand with properties in Rome, Frankfurt, Palermo, Madrid, Cairo, Johannesburg, Monte Carlo, and of course London. His hotels were in fact so fancy that he gave us the term ritzy. Ritz is one of the most lasting names, but this era was the first boom in the hotel industry, with luxury hotels popping up all over the world: the Astor, the Waldorf Astoria, the Plaza, the Taj Mahal, and the list goes on. The rise of the hotel industry was well on its way when Conrad Hilton bought the Mobley Hotel in Cisco, Texas in 1919. By 1925 he would open the Dallas Hilton, and while opening further hotels nearly ruined him in the Great Depression, he emerged into the post-World War II boom times, establishing a juggernaut now boasting 568 hotels. Best Western would start in 1946 and now has 4,200 locations. After World War II we saw the rise of the American middle class and the great American road trip. Chains exploded. Choice Hotels, which acts as more of a franchisor, was established in 1939 and sits with 7,000 locations, but that’s spread across Extended Stay, MainStay, Quality Inn, Cambria Hotels, Comfort Inn, and other brands. 
Holiday Inn was founded in 1952 in the growing post-war boom time by Kemmons Wilson and named after the movie of the same name. The chain began with that first hotel in 1952 and within 20 years hit 1,400 Holiday Inns, landing Wilson on the cover of Time as “The Nation’s Innkeeper.” They would end up owning Harrah's Entertainment, Embassy Suites Hotels, Crowne Plaza, Homewood Suites, and Hampton Inn, now sitting with 1,173 hotels. Ramada would be started the next year by Marion Isbell and has now grown to 811 locations. Both of them started their companies due to the crappy hotels found on the sides of roads, barely a step above those of the medieval days. Howard Johnson took a different path, starting with soda shops, then restaurants, and opening his first hotel in 1954, expanding to 338 at this point, and now owned by Wyndham Hotels, a much later entrant into the hotel business. Wyndham also now owns Ramada. The 1980s led to a third boom in hotels with globalization, much as it was the age of globalization for other brands and industries. The oil boom in the Middle East, the rising European Union, the opening up of Asian markets. And as they grew, the chains used computers and built software to cut costs and enable loyalty programs. It was an explosion of money and profits, and as the 80s gave way to the 90s, the Internet gave customers the ability to comparison shop. Sites that aggregated hotel information rose, with Expedia, Travelocity, American Express, even Concur. Sites came and went quickly, and they made it easy for AccorHotels to research and then buy Raffles, Sofitel, and Novotel, and for Intercontinental and others to usher in the era of acquisitions and mergers. Meanwhile the Internet wasn’t just about booking hotels at chains easily. VRBO began in 1995 when David Clouse wanted to rent his condo in Breckenridge and got sick of classifieds. 
Seeing the web on the rise, he built a website and offered subscriptions to rent properties for vacations, letting owners and renters deal directly with one another to process payments. Vacation Rentals By Owner, or VRBO, would expand through the 90s. And then Paris Hilton happened. Her show The Simple Life in 2003 led to a 5-year career that seemed to fizzle at the Toronto International Film Festival in 2008 with the release of a critical documentary about her called Paris, Not France. The mergers and acquisitions and globalization, and being packed in stale smokey rooms like sardines, seemed to have run their course. Boutique hotels were opening, a trend that started in the 90s, and by 2008 W Hotels was expanding into Europe, now with 55 properties around the world. And that exemplifies the backlash against big chains that was starting to brew. In 2004, CEH Holdings bought a few websites to start HomeAway and in 2006 raised $160 million in capital to buy VRBO and gain access to their then 65,000 properties. HomeAway would be acquired by Expedia in 2015 for $3.9 billion, but not before a revolution in the hospitality industry began. That revolution started with two industrial design students. Brian Chesky and Joe Gebbia had come from the Rhode Island School of Design. After graduation Gebbia would move to San Francisco and Chesky would move to Los Angeles. They had worked on projects together in college, and Gebbia bugged Chesky about moving to San Francisco to start a company together for a few years. By 2007 Chesky gave in and made the move, becoming one of Gebbia’s two roommates. It was the beginning of the Great Recession. They were having trouble making rent. The summer of 2008 brought the Industrial Designers Society of America’s Industrial Design Conference to San Francisco. They had the idea to take a few air beds from a recent camping trip and rent them out in their apartment. Paris Hilton would never have done that. 
They reached out to a former roommate of theirs, Nathan Blecharczyk. He’s a Harvard alum and a pretty rock-solid programmer, and he signed on to be a co-founder, building them a website in Ruby on Rails. They rented those three airbeds out and called their little business Airbed & Breakfast. They thought they were on to something. I mean, who wouldn’t want to rent an airbed and crash on someone’s kitchen floor?!?! But reality was about to come calling. Venture capital was drying up due to the deepening recession. They tried to raise funding and failed. And so far their story seems pretty standard. But this is where I start really liking them. They bought a few hundred boxes of cereal and made "Obama O's" and "Cap'n McCain's" to sell at the Democratic National Convention in 2008 for $40 per box. They sold $30,000 worth, enough to bootstrap the company. They would go to South By Southwest and visit events, growing slowly in New York and San Francisco. The money would last them long enough to make it into Y Combinator in 2009. Paul Graham and the others at Y Combinator have helped launch 2,000 companies, including Docker, DoorDash, Dropbox, GitLab, Gusto, Instacart, Reddit, Stripe, Twitch, and Zapier. They got $20,000 from Y Combinator. They changed the site to Airbnb.com, and people started to book more and more stays - and not just on airbeds, but renting their full homes out. They charged 3% of the booking as a fee - a number that hasn’t really changed in all these years. They would get $600,000 in funding from Sequoia Capital in 2009 when they finally got up to 2,500 listings and had 10,000 users. Nothing close to what VRBO had, but they would get more funding from Sequoia and added Greylock to the investors, and by the close of 2010 they were approaching a million nights booked. From here, the growth got meteoric. They won the app award during a triumphant return to South By Southwest in 2011 and went international, opening an office in London and expanding bookings to 89 countries. 
The investments, the advertising, the word of mouth, the media coverage. So much buzz and so much talk about innovation and disruption. The growth was explosive. They would iterate the website and raised another $112 million in venture capital. And by 2012 they hit 10 million nights booked. And that international expansion paid off, with well over half of those booked outside of the United States. Growth of course led to problems. A few guests trashed their lodgings, and Airbnb responded with a million-dollar policy to help react to those kinds of things in the future. Some of the worst aspects of humanity can be seen on the web. They also encountered renters trying to discriminate based on race. So they updated their policies and took a zero-tolerance approach. More importantly, they acknowledged that they hadn’t had to think about such things, given the privilege of having a company founded by three white guys. They didn’t react with anger or deflection. They said we need to be better, with every problem that came up. And the growth continued. Doubling every year. They released a new logo and branding in 2014 and by 2016 were valued at $30 billion. They added Trips, which seems to still be trying to catch up to what Groupon started doing for booking excursions years ago. During the rise of Airbnb we saw an actual increase in hotel profits. Customers are often millennials who are traveling more and more, given the way that the friction and some of the cost has been taken out of travel. The average age of a host is 43. And of the hosts I know, I can wager that Airbnb rentals have pumped plenty of cash back into local economies, based on people taking better care of their homes, keeping fresh paint, and the added tourism spend when customers are exploring new cities. And not just visiting chains. After all, you stay at an Airbnb for the adventure, not to go shop for the same stuff at Forever 21. 
Even if you take out the issues with guests trashing places and racism, it still hasn’t all been sunshine and unicorns. Airbnb has been in legal battles with New York and a few other cities for years. Turns out that speculators and investors cause extra strain on an already over-burdened housing market. If you want to see the future of living in any dense population center, just look to New York. As the largest city in the US, it’s also the largest public-institution landlord, with over 400,000 tenants. And rent is rising almost twice as fast as incomes, with lower-income rents going up faster than those of the wealthy. Independent auditors claim that Airbnb is actually accountable for 9.2 percent of that. But 79 percent of hosts use their Airbnb earnings to afford their apartments. And many of the people that count on Airbnb to make their rent couldn’t afford their apartments without it. Airbnb argues their goal is to have “one host, one home,” which is to say they don’t want a lot of investors. After all, will most investors want to sit around the kitchen table and talk about the history of the city or cool tidbits about neighborhoods? Probably not. Airbnb was started to offer networking opportunities and a cool place to stay that isn’t quite so… sterile. Almost the opposite of Paris Hilton’s life, at least according to TMZ and MTV shows. San Francisco and a number of other cities have passed ordinances as well, requiring permits to rent homes through Airbnb and capping the number of days a home can be rented through the service, often at about two-thirds of a year. But remember, Airbnb is just the most visible, not the only game in town. Most category leaders have pre-existing competition, like VRBO and HomeAway. And given the valuation and insane growth of Airbnb, it’s also got a slew of specialized competitors. This isn’t to say that they don’t contribute to the problems with skyrocketing housing costs. They certainly do. 
As is often the case with true disruptors, Pandora’s Box is open and can’t be closed again. Regulation will help to limit the negative impacts of the disruption, but local governments will alienate a generation that grew up with that disruption if they are overly punitive. And most of the limits in place are easily subverted anyway. For example, if there’s a limit on the number of nights you can rent, just do half on VRBO and the other half on Airbnb. But no matter the problems, Airbnb continues to grow. They react well. Chesky, now the CEO, has a deep pipeline of advisors he can call on in times of crisis. Whether corporate finance, issues with corporate infighting, crisis management, or whatever the world throws at them, the founders and the team they’ve surrounded themselves with have proven capable of doing almost anything. Today, Airbnb handles over half a million transactions per night. They are strongest with millennials, but get better and better at expanding out of their core market. One adjacency would be corporate bookings through a partnership with Concur and others, something we saw with Uber as well. Another adjacency: they now make more money than Hilton and the Hilton subsidiaries. Having said that, the major hotel chains are all doing better financially today than ever before and continue to thrive maybe despite, or maybe because of, Airbnb. That might be misleading though; revenue per room is actually decreasing in correlation with the rise of Airbnb. And of course that’s amplified at the bottom tier of hotels. Just think of what would have happened had they not noticed that rooms were selling out for a conference in 2007. Would what we now call the “sharing” economy be as much of a thing? Probably. Would someone else have seized the opportunity? Probably. But maybe not. 
And hopefully the future will net a more understanding and better connected society once we’ve all gotten such intimate perspectives on different neighborhoods and the amazing little ecosystems that humanity has constructed all over the world. That is the true disruption: in an age of global sterility, offering the most human of connections. As someone who loves staying in quirky homes on Airbnb, a very special thanks to Chesky, Gebbia, Blecharczyk, and the many, many amazing people at Airbnb. Thank you for reacting the way you do to problems when they arise. Thank you for caring. Thank you for further democratizing and innovating hospitality and experiences. And most importantly, thank you for that cabin by the lake a few months ago. That was awesome! And thanks to the listeners who tuned in to this episode of the History of Computing Podcast. Have a great day!

The Evolution (and De-Evolution) of the Mac Server


Today's episode is on one of the topics I am probably the most intimate with of any we’ll cover: the evolution of the Apple servers and then the rapid pivot towards a much more mobility-focused offering. Early Macs in 1984 shipped with AppleTalk. These could act as a server or workstation. But after a few years, engineers realized that Apple needed a dedicated server platform. Apple has had a server product starting in 1987 that lives on to today. At Ease had some file and print sharing options. But the old AppleShare (later called AppleShare IP) server was primarily used to provide network resources to the Mac from 1986 to 2000, with file sharing being the main service offered. There were basically two options: At Ease, which ran on the early Mac operating systems, and A/UX, or Apple Unix. The latter brought paged memory management and could run on the Macintosh II through the Centris Macs. Apple Unix shipped from 1988 to 1995 and had been based on System V. It was a solidly performing TCP/IP machine and introduced the world of POSIX. Apple Unix could emulate Mac apps, and once you were under the hood, you could do pretty much anything you might do in another Unix environment. Apple also took a stab at early server hardware in the form of the Apple Network Server, announced in 1995 when Apple Unix went away, for the Quadra 950, and a PowerPC server sold from 1996 to 1997, although the name was used all the way until 2003. While these things were much more powerful and came with modern hardware, they didn’t run the Mac OS but ran another Unix type of operating system, AIX, which had begun life at about the same time as Apple Unix and was another System V variant, but had much more work done on it. Given financial issues at Apple and the Taligent relationship between Apple and IBM to build a successor to Mac OS and OS/2, it made sense to work together on the project. 
Meanwhile, At Ease continued to evolve and Apple eventually shipped a new offering in the form of AppleShare IP, which worked up until Mac OS 9.2.2. In an era before you needed to, as an example, require SMTP authentication, AppleShare IP was easily used for everything from file sharing services to mail services. An older Quadra made for a great mail server, so your company could stop paying an ISP for some weird email address like that AOL address you got in college and get your own domain in 1999! And if you needed more, you could easily slap some third-party software on the hosts. If you actually wanted SMTP authentication so your server didn’t get used to route this weird thing called spam, you could install CommuniGate or later CommuniGate Pro. Keep in mind that many of the engineers from NeXT after Steve Jobs left Apple had remained friends with engineers from Apple. Some still actually work at Apple. Serving network services was a central need for NEXTSTEP and OPENSTEP systems. The UNIX underpinnings made it possible to compile a number of open source software packages, and the first web server was hosted by Tim Berners-Lee on a NeXTcube. During the transition over to Apple, AppleShare IP and services from NeXT were made to look and feel similar and turned into Rhapsody around 1999 and then Mac OS X Server around 2000. The first few releases of Mac OS X Server represented a learning curve for many classic Apple admins, and in fact caused a generational shift in who administered the systems. John Welch wrote books in 2000 and 2002 that helped administrators get up to speed. The Xserve was released in 2002 and the Xserve RAID was released in 2003. It took time, but a community began to form around these products. The Xserve would go from a G3 to a G4. The late Michael Bartosh compiled a seminal work in “Essential Mac OS X Panther Server Administration” for O’Reilly Media in 2005. I released my first book, The Mac Tiger Server Black Book, in 2006. 
The server was enjoying a huge upswing in use. Schoun Regan and Kevin White wrote a Visual QuickStart for Panther Server. Schoun wrote one for Tiger Server. The platform was growing. People were interested. Small businesses, schools, universities, art departments in bigger companies. The Xserve would go from a G4 to an Intel processor, and we would get cluster nodes to offload processing power from more expensive servers. Up until this point, Apple had never publicly acknowledged that businesses or enterprises used their devices, so the rise of the Xserve advertising was the first time we saw that acknowledgement. Apple continued to improve the product with new services up until 2009 with Mac OS X Server 10.6. At this point, Apple included most services necessary for running a standard IT department for small and medium-sized businesses in the product, including web (in the form of Apache), mail, groupware, DHCP, DNS, directory services, file sharing, and even web and wiki services. There were also edge-case services such as Podcast Producer for automating video and content workflows, and Xsan, a clustered file system. Apple even made an acquisition whose Artbox product was rebranded as Final Cut Server. Apple now had multiple awesome, stable products. Dozens of books and websites were helping build a community and growing knowledge of the platform. But that was a turning point. Around that same time Apple had been working towards the iPad, released in 2010 (although arguably the Knowledge Navigator was the first iteration, conceptualized in 1987). The skyrocketing sales of the iPhone led to some tough decisions. Apple no longer needed to control the whole ecosystem with their server product and instead began transitioning as many teams as possible to work on higher-profit-margin areas, reducing focus on areas that took attention away from valuable software developers who were trying to solve problems many other vendors had already solved better. 
In 2009 the Xserve RAID was discontinued, and the Xserve went away the following year. By then, the Xserve RAID was lagging, and for the use cases it served there were other vendors whose sole focus was storage - vendors Apple actively helped point customers towards, namely the Promise array for Xsan. A few other things were happening around the same time. Apple could have bought Sun for less than 10% of their cash reserves in 2010 but allowed Oracle to buy the tech giant. Instead, Apple released the iPad. Solid move. They also released the Mac mini server, which, while it lacked rack-and-stack options like an IPMI interface to remotely reboot the server and dual power supplies, was actually more powerful. The next few years saw services slowly peeled off the server. Today, the Mac OS X Server product has been migrated to just an app on the App Store. Today, macOS Server is meant to run Profile Manager and to act as a metadata controller for Xsan, Apple’s clustered file system. Products that used to compete with the platform are now embraced by most in the community. For the most part, this is because Apple let Microsoft or Linux-based systems own the market for providing features that are often unique to each enterprise and not about delighting end users. Today, building server products that try to do everything for everyone seems like a distant memory for many at Apple. But there is still a keen eye towards making the lives of the humans that use Apple devices better, as has been the case since Steve Jobs mainstreamed the GUI and Apple made the great user experience advocate Larry Tesler their Chief Scientist. How services make a better experience for end users can be seen in the Caching service built into macOS (moved there from macOS Server) and in how some products, such as Apple Remote Desktop, are still very much alive and kicking. 
But the focus on profile management, and the desire to open up everything Profile Manager can do to third-party developers who serve often niche markets or look more to scalability, is certainly front and center. I think this story of the Apple server offering is really much more about Apple branching into areas they needed to be in at various points in time. Then having a constant focus on iterating to a better, newer offering. Growing with the market. Helping the market get to where Apple needed it to be. Serving the market, and then, when the needs of the market could be better served elsewhere, pulling back so other vendors could serve it. Not looking to grow a billion-dollar business unit in servers - but instead looking to provide them just until the market no longer needed them to. In many ways Apple paved the way for billion-dollar businesses to host services. And the SaaS ecosystem is as vibrant for the Apple platform as ever. My perspective on this has changed a lot over the years. As someone who wrote a lot of books about the topic, I might have been harsh at times. But that’s one great reason not to be judgmental. You don’t always know the full picture, and it’s super easy to miss big strategies like that when you’re in the middle of it. So thank you to Apple for putting user experience into servers, as with everything you do. And thank you listeners for tuning into this episode of the History of Computing Podcast. We’re certainly lucky to have you and hope you join us next time!

Saying Farewell to Larry Tesler


Today we’re going to honor Larry Tesler, who died on February 17th, 2020. Larry Tesler is probably best known for early pioneering work on graphical user interfaces. He was the person that made up cut, copy, and paste as a term. Every time you say “just paste that in there,” you’re honoring his memory. I’ve struggled with how to write the episode or episodes about Xerox PARC. It was an amazing crucible of technical innovation. But they didn’t materialize huge commercial success for Xerox. Tesler was one of the dozens of people who contributed to that innovation. He studied with John McCarthy and other great pioneers at the Stanford Artificial Intelligence Laboratory in the 60s. What they called artificial intelligence back then we might call computer science today. Being in the Bay Area in the 60s, Tesler got active in war demonstrations and disappeared off to a commune in Oregon until he got offered a job by Alan Kay. You might remember Kay from earlier episodes as the one behind Smalltalk and the Dynabook. They’d both been at The Mother of All Demos, where Doug Engelbart showed the mouse, the first hyperlinks, and the graphical user interface, and they’d been similarly inspired about the future of computing. So Tesler moved back down in 1970. I can almost hear Three Dog Night’s Mama Told Me Not To Come booming out of the 8-track of his car stereo on the drive. Or hear Nixon and Kissinger on the radio talking about why they invaded Cambodia. So he gets to PARC and there’s a hiring freeze at Xerox, who after monster growth was starting to get crushed by bureaucracy. Les Earnest from back at Stanford had him write one of the first markup language implementations, which he called Pub. That became the inspiration for Don Knuth’s TeX and Brian Reid’s Scribe, and an ancestor of JavaScript and PHP. They found a way to pay him, basically bringing him on as a contractor. He worked on Gypsy, the first real word processor. 
At the time, they’d figured out a way of using keystrokes to switch modes for documents. Think of how in vi or pico you switch to a mode in order to insert or move; here they were applying metadata to an object, like making text bold or copying text from one part of a document to another. Those modes were terribly cumbersome, and due to very simple mistakes, people would delete their documents. So he and Tim Mott started looking at ways to get rid of modes. That’s when they came up with the idea to make a copy and paste function. And to use the terms cut, copy, and paste. These are now available in all “what you see is what you get,” or WYSIWYG, interfaces. Oh, he also coined that term while at PARC, although maybe not the acronym. And he became one of the biggest proponents of making software “user-friendly” when he was at PARC. By the way, that’s another term he coined, with relation to computing at least. He also seems to be the first to have used the term browser, after building a browser for a friend to more easily write code. He’d go on to work on the Xerox Alto and NoteTaker. That team, which would be led by Adele Goldberg after Bob Taylor and then Alan Kay left PARC, got a weird call to show these kids from Apple around. The scientists from PARC didn’t think much of these hobbyists, but in 1979, despite Goldberg’s objections, Xerox management let the fox into the chicken coop when they let Steve Jobs and some other early Apple employees get a tour of PARC. Tesler would be one of the people giving Jobs a demo. And it’s no surprise that, after watching Xerox not ship the Alto, Tesler would end up at Apple 6 months later. After Xerox bonuses were distributed, of course. At Apple, he’d help finish the Lisa. It cost far less than the Xerox Star, but it wouldn’t be until it went even further down-market to become the Macintosh that all of their hard work at Xerox and then Apple would find real success. 
Kay would become a fellow at Apple in 1984, as many of the early great pioneers left PARC. Tesler was the one that added object-oriented programming to Pascal, used to create the Lisa Toolkit, and then he helped bring those into MacApp as class libraries for developing the Mac GUI. By 1990, Jobs had been out of Apple for 5 years and Tesler became the Vice President of the Newton project at Apple. He’d see Alan Kay’s concept of the digital assistant made into a reality. He would move into the role of Chief Scientist at Apple once the project was complete. There, he made his own mini-PARC, but would shut down the group and leave after Apple entered their darkest age in 1997. Tesler had been a strong networking proponent, acting as the VP of AppleNet and pushing more advanced networking options prior to his departure. He would strike out and build Stagecast, a visual programming language that began life as an object-oriented teaching language called Cocoa. Apple would reuse the name Cocoa when they ported in OpenStep, so it's not the Cocoa many developers will remember or maybe even still use. Stagecast would run until Larry decided to join the executive team at Amazon. At Amazon, Larry was the VP of Shopping Experience and would start a group on usability, doing market research, usability research, and lots of data mining. He would stay there for 4 years before moving on to Yahoo!, spreading the gospel about user experience and design, managing up to 200 people at a time and embedding designers and researchers into product teams, a practice that’s become pretty common in UX. He would also be a fellow at Yahoo! before taking that role at 23andMe and ending his long and distinguished career as a consultant, helping make the world a better place. He conceptualized the Law of Conservation of Complexity, or Tesler’s Law, which he framed in 1984: “Every application has an inherent amount of irreducible complexity. 
The only question is: Who will have to deal with it—the user, the application developer, or the platform developer?” But one of my favorite quotes of his is: “I have been mistakenly identified as ‘the father of the graphical user interface for the Macintosh.’ I was not. However, a paternity test might expose me as one of its many grandparents.” The first time I got to speak with him, he was quick to point out that he didn’t come up with much; he was simply carrying on the work started by Engelbart. He was kind and patient with me. When Larry passed, we lost one of the founders of the computing world as we know it today. He lived and breathed user experience and making computers more accessible. That laser focus on augmenting human capabilities by making the inventions easier to use and more functional is probably what he’d want to be known for above all else. He was a good programmer, but almost too empathetic not to end up with a focus on the experience of the devices. I’ll include a link in the show notes to an episode of 99% Invisible he appeared on, if you want to hear more from him directly. Everyone except the people who get royalties from White Out loved what he did for computing. He was a visionary and one of the people that ended up putting the counterculture into computing culture. He was a pioneer in User Experience and a great human. Thank you, Larry, for all you did for us. And thank you, listeners, in advance or in retrospect, for your contributions.



OS/2
Today we’re going to look at an operating system from the 80s and 90s called OS/2. OS/2 was a bright shining light for a bit. IBM had a task force that wanted to build a personal computer. They’d been watching the hobbyists for some time and felt they could take off-the-shelf parts and build a PC. So they did. But they needed an operating system. They reached out to Microsoft in 1980, who’d been successful with the Altair and so seemed a safe choice. By then, IBM had the IBM Entry Systems Division based out of their Boca Raton, Florida offices. The open architecture allowed them to ship fast. And it afforded them the chance to ship a computer with, check this out, options for an operating system. Wild idea, right? The options initially provided were CP/M and PC DOS, which was MS-DOS ported to the IBM open architecture. CP/M sold for $240 and PC DOS sold for $40. PC DOS had come from Microsoft’s acquisition of 86-DOS from Seattle Computer Products. The PC shipped in 1981, lightning fast for an IBM product. At the time, Apple, Atari, Commodore, and others were in control of the personal computer market. IBM had dominated the mainframe market for decades, and once the personal computer market reached $100 million in sales, it was time to go get some of that. And so the IBM PC would come to be an astounding success and make it not uncommon to see PCs on people’s desks at work or even at home. And being that most people didn’t know the difference, PC DOS would ship on most. By 1985 it was clear that Microsoft had entered and subsequently dominated the PC market. And it was clear that due to the open architecture, other vendors were starting to compete. And after 5 years of working together on PC DOS and 3 versions later, Microsoft and IBM signed a Joint Development Agreement and got to work on the next operating system. One they thought would change everything and set IBM PCs up to dominate the market for decades to come. 
Over that time, they’d noticed some gaps in DOS. One of the most substantial was that once projects and files got too big, they became unwieldy. They wanted an object-oriented operating system. Another was protected mode. The 286 chips from Intel had offered protected mode dating back to 1982, and IBM engineers felt they needed to harness that in order to get multitasking safely and use virtual memory to provide better support for all these crazy new windowing things they’d learned with their GUI overlay to DOS called TopView. So after the Joint Development Agreement was signed, IBM let Ed Iacobucci lead the charge on their side, and Microsoft had learned a lot from their attempts at a windowing operating system. The two organizations borrowed ideas from all the literature and Unix and of course the Mac. And really built a much better operating system than anything available at the time. Microsoft had been releasing Windows the whole time. Windows 1 came in 1985 and Windows 2 came in 1987, the same year OS/2 1.0 was released. In fact, one of the most dominant PC models to ever ship, the PS/2 computer, would ship that year as well. The initial release didn’t have a GUI. That wouldn’t come until version 1.1 nearly a year later in 1988. SNA shipped to interface with IBM mainframes in that release as well. And TCP/IP and Ethernet would come in version 1.2 in 1989. During this time, Microsoft steadily introduced new options in Windows and claimed both publicly and privately in meetings with IBM that OS/2 was the OS of the future and Windows would some day go away. They would release an extended edition that included a built-in database. Thanks to protected mode, developers didn’t have to call the BIOS any more and could just use provided APIs. You could switch the foreground application using control-escape. In Windows that would become Alt-Tab. 
1.2 brought the HPFS file system, bringing longer file names, a journaled file system to protect against data loss during crashes, and extended attributes, similar to how those worked on the Mac. But many of the features would ship in a version of Windows that would be released just a few months before. Like that GUI: Windows 2.1 arrived just a few months before OS/2 1.1 introduced its Presentation Manager interface. Microsoft had an independent sales team. Every manufacturer that bundled Windows meant there were more drivers for Windows, so a wider variety of hardware could be used. Microsoft realized that DOS was old and that building on top of DOS was going to some day be a big, big problem. They started something similar to what we’d call a fork today of OS/2. And in 1988 they lured Dave Cutler from Digital, who had been the architect of the VMS operating system. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler’s NT would replace all other operating systems in the family with the release of Windows 2000. But by 1990 when Microsoft released Windows 3 they sold millions of copies. Due to great OEM agreements they were on a lot of computers that people bought. The Joint Development Agreement would finally end. IBM had enough of what they assumed meant getting snowed by Microsoft. It took a couple of years for Microsoft to recover. In 1992, the war was on. Microsoft released Windows 3.1 and it was clear that they were moving ideas and people between the OS/2 and Windows teams. I mean, the operating systems actually looked a lot alike. 
TCP/IP finally shipped in Windows in 1992, 3 years after the companies had co-developed the feature for OS/2. But both would go 32-bit in 1992. OS/2 version 2.0 would also ship, bringing a lot of features. And both took off the blinders thinking about what the future would hold. Microsoft put Windows 95 and NT on parallel development tracks, and IBM launched multiple projects to find a replacement operating system. They tried an internal project, Workplace OS, which fizzled. And then IBM did the unthinkable. They entered into an alliance with Apple, taking on a number of Apple developers who formed what would be known as the Pink team. The Pinks moved into separate quarters and formed a new company called Taligent with Apple and IBM backing. Taligent planned to bring a new operating system to market in the mid-1990s. They would laser focus on PowerPC chips, thus abandoning what was fast becoming the WinTel world. They did show Workplace OS at Comdex one year, but by then Bill Gates was all too happy to swing by the booth, knowing he’d won the battle. But they never shipped. By the mid-90s, Taligent would be rolled into IBM and focus on Java projects. Raw research that came out of the project is pretty pervasive today though. That was an example of a forward-looking project, though - and OS/2 continued to be developed, with OS/2 Warp (or 3) getting released in 1994. It included IBM Works, which came with a word processor that wasn’t Microsoft Word, a spreadsheet that wasn’t Microsoft Excel, and a database that wasn’t Microsoft Access. Works wouldn’t last past 1996. After all, Microsoft had Charles Simonyi by then. He’d invented the GUI word processor at Xerox PARC and was light years ahead of the Warp options. And the Office Suite in general was gaining adoption fast. Warp was faster than previous releases, had way more options, and even browser support for early Internet adopters. 
But by then Windows 95 had taken the market by storm and OS/2 would see a rapidly declining customer base. After spending nearly a billion dollars a year on OS development, IBM would begin downsizing once the battle with Microsoft was lost. Over 1,300 people. And as the number of people dropped, defects in the code grew and adoption dropped even faster. OS/2 would end in 2001. By then it was clear that IBM had lost the exploding PC market and that Windows was the dominant operating system in use. IBM’s control of the PC had slowly eroded, and while they eked out a little more profit from the PC, they would ultimately sell the division that built and marketed computers to Lenovo in 2005. Lenovo would then enjoy the number one spot in the market for a long time. The red ocean had resulted in lower margins though, and IBM had taken a different, more services-oriented direction. OS/2 would live on, though. IBM discontinued support in 2006. It should have probably gone fully open source in 2005. Instead it was renamed and rebranded as eComStation, first by an IBM Business Partner called Serenity. It would go open source(ish) with version two in 2010. Betas of 2.2 have been floating around since 2013, but as with many other open source compilations of projects, it seems to have mostly fizzled out. Ed Iacobucci would go on to found or co-found other companies, including Citrix, which flourishes to this day. So what really happened here? It would be easy, but an over-simplification, to say that Microsoft just kinda took the operating system. IBM had a vision of an operating system that, similar to the Mac OS, would work with a given set of hardware. Microsoft, being an independent software developer with no hardware, would obviously have a different vision, wanting an operating system that could work with any hardware - you know, the original open architecture that allowed early IBM PCs to flourish. 
IBM had a big business suit-and-tie corporate culture. Microsoft did not. IBM employed a lot of computer scientists. Microsoft employed a lot of hackers. IBM had a large bureaucracy; Microsoft could build an operating system like NT mostly based on hiring a single brilliant person and rapidly building an elite team around them. IBM was a matrixed organization. I’ve been told you aren’t an enterprise unless you’re fully matrixed. Microsoft didn’t care about all that. They just wanted the marketshare. When Microsoft abandoned OS/2, IBM could have taken the entire PC market from them. But I think Microsoft knew that the IBM bureaucracy couldn’t react quickly enough at an extremely pivotal time. Things were moving so fast. And some of the first real buying tornados just had to be reacted to at lightning speeds. These days we have literature, and those going through such things can bring in advisors or board members to help them. Like the roles Marc Andreessen plays with Airbnb and others. But this was uncharted territory, and due to some good, shrewd, and maybe sometimes downright bastardly decisions, Microsoft ended up leap-frogging everyone by moving fast, sometimes incurring technical debt that would take years to pay down, and grabbing the market at just the right time. I’ve heard this story oversimplified in one word: subterfuge. But that’s not entirely fair. When he was hired in 1993, Louis Gerstner pivoted IBM from a hardware and software giant into a leaner services organization. One that still thrives today. A lot of PC companies came and went. And the PC business infused IBM with the capital to allow the company to shoot from $29 billion in revenues to $168 billion just 9 years later. From the top down, IBM was ready to leave red oceans and focus on markets with fewer competitors. Microsoft was hiring the talent. Picking up many of the top engineers from the advent of interactive computing. 
And they learned from the failures of the Xeroxes and Digital Equipments and IBMs of the world and decided to do things a little differently. When I think of a few Microsoft engineers that just wanted to build a better DOS sitting in front of a 60-page refinement of how a feature should look, I think maybe I’d have a hard time trying to play that game as well. I’m all for relentless prioritization. And user testing features and being deliberate about what you build. But when you see a limited window, I’m OK acting as well. That’s the real lesson here. When the day needs seizing, good leaders will find a way to blow up the establishment and release the team to go out and build something special. And so yeah, Microsoft took the operating system market once dominated by CP/M and, with IBM’s help, established themselves as the dominant player. And then took it from IBM. But maybe they did what they had to do… Just like IBM did what they had to do, which was move on to more fertile hunting grounds for their best-in-the-world sales teams. So tomorrow, think of bureaucracies you’ve created or had created to constrain you. And think of where they are making the world better vs where they are just giving some controlling jackrabbit a feeling of power. And then go change the world. Because that is what you were put on this planet to do. Thank you so much for listening in to this episode of the History of Computing podcast. We are so lucky to have you.

The Mouse


In a world of rapidly changing technologies, few have lasted as long, in as unaltered a fashion, as the mouse. The party line is that the computer mouse was invented by Douglas Engelbart in 1964 and that it was a one-button wooden device that had two metal wheels. Those used an analog to digital conversion to input a location to a computer. But there’s a lot more to tell. Engelbart had read an article in 1945 called “As We May Think” by Vannevar Bush. He was in the Philippines working as a radio and radar tech. He’d return home, get his degree in electrical engineering, then go to Berkeley for first his master’s and then a PhD, still in electrical engineering. At the time there were a lot of military grants in computing floating around, and a Navy grant saw him work on a computer called CALDIC, short for the California Digital Computer. By the time he completed his PhD he was ready to start a computer storage company but ended up at the Stanford Research Institute in 1957. He published a paper in 1962 called Augmenting Human Intellect: A Conceptual Framework. That paper would guide the next decade of his life and help shape nearly everything in computing that came after. Keeping with the theme of “As We May Think,” Engelbart was all about supplementing what humans could do. The world of computer science had been interested in selecting things on a computer graphically for some time. And Engelbart would have a number of devices that he wanted to test in order to find the best possible device for humans to augment their capabilities using a computer. He knew he wanted a graphical system and wanted to be deliberate about every aspect in a very academic fashion. And a key aspect was how people that used the system would interact with it. The keyboard was already a mainstay, but he wanted people pointing at things on a screen. While Engelbart would invent the mouse, pointing devices certainly weren’t new. 
Pilots had been using the joystick for some time, but an electrical joystick had been developed at the US Naval Research Laboratory in 1926, with the concept of unmanned aircraft in mind. The Germans would end up building one in 1944 as well. But it was Alan Kotok who brought the joystick to the computer game in the early 1960s to play Spacewar on minicomputers. And Ralph Baer brought it into homes with an early video game system, the Magnavox Odyssey, which he began prototyping in 1967. Another input device that had come along was the trackball. Ralph Benjamin of the British Royal Navy’s Scientific Service invented the trackball, or ball tracker, for radar plotting on the Comprehensive Display System, or CDS. The computers were analog at the time, but they could still use the X-Y coordinates from the trackball, which they patented in 1947. Tom Cranston, Fred Longstaff and Kenyon Taylor had seen the CDS trackball and used that as the primary input for DATAR, a radar-driven battlefield visualization computer. The trackball stayed in radar systems into the 60s, when Orbit Instrument Corporation made the X-Y Ball Tracker and then Telefunken turned it upside down to control the TR 440, making an early mouse type of device. The last of the options Engelbart decided against was the light pen. Light guns had shown up in the 1930s when engineers realized that a vacuum tube was light-sensitive. You could shoot a beam of light at a tube and it could react. Robert Everett worked with Jay Forrester to develop the light pen, which would allow people to interact with a CRT using light sensing to cause an interrupt on a computer. This would move to the SAGE computer system from there and sneak into the IBM mainframes in the 60s. While the technology used to track the coordinates is not even remotely similar, think of this as conceptually similar to the styluses used with tablets and on Wacom tablets today. 
Paul Morris Fitts had built a model in 1954, now known as Fitts’s Law, to predict the time that’s required to move things on a screen. He defined the difficulty of reaching a target as a function of the ratio between the distance to the target and the width of the target. If you listen to enough episodes of this podcast, you’ll hear a few names repeatedly. One of those is Claude Shannon. He brought a lot of the math to computing in the 40s and 50s and helped with the Shannon-Hartley Theorem, which defined information transmission rates over a given medium. So these were the main options at Engelbart’s disposal to test when he started his Augmentation Research Center, or ARC. But in looking at them, he had another idea. He’d sketched out the mouse in 1961 while sitting in a conference session about computer graphics. Once he had funding he brought in Bill English to build a prototype in 1963. The first model used two perpendicular wheels attached to potentiometers that tracked movement. It had one button to select things on a screen. It tracked x,y coordinates as had previous devices. NASA funded a study to really dig in and decide which was the best device. He, Bill English, and an extremely talented team spent two years researching the question, publishing a report in 1965. They really had the blinders off, too. They looked at the DEC Grafacon, joysticks, light pens, and even what amounts to a mouse that was knee-operated. Two years of what we’d call UX research or User Research today. Few organizations would dedicate that much time to study something. But the result would be patenting the mouse in 1967, an innovation that would last for over 50 years. I’ve heard Engelbart criticized for taking so long to build the oN-Line System, or NLS, which he showcased at the Mother of All Demos. But it’s worth thinking of his research as academic in nature. It was government funded. And it changed the world. His paper on Computer-Aided Display Controls was seminal. 
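Fitts’s Law, mentioned above, is simple enough to sketch in a few lines of code. Here’s a minimal illustration using its common Shannon formulation; note that the constants a and b below are hypothetical placeholders, since in practice they are fitted empirically for each pointing device and user population.

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted time in seconds to acquire a target with a pointing device.

    distance: distance to the center of the target
    width: width of the target along the axis of motion
    a, b: empirically fitted device constants (illustrative values here)
    """
    # The Shannon formulation: MT = a + b * log2(D / W + 1)
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target is predicted to take longer than a big, nearby one:
slow = fitts_movement_time(distance=800, width=16)   # tiny far-away button
fast = fitts_movement_time(distance=100, width=100)  # large close button
assert slow > fast
```

This ratio between distance and width is why, for instance, edge-of-screen targets (which are effectively infinitely wide, since the cursor stops there) are so fast to hit.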
Vietnam caused a lot of those government funded contracts to dry up. From there, Bill English and a number of others from the Stanford Research Institute, which ARC was a part of, moved to Xerox PARC. English and Jack Hawley iterated and improved the technology of the mouse, ditching the analog to digital converters, and over the next few years we’d see some of the most substantial advancements in computing. By 1981, Xerox had shipped the Alto and the Star. But while Xerox would be profitable with their basic research, they would miss something that a sandal-clad hippy wouldn’t. In 1979, Xerox let Steve Jobs make three trips to PARC in exchange for the opportunity to buy 100,000 shares of Apple stock pre-IPO. The mouse by then had evolved to a three-button mouse that cost $300. It didn’t roll well and had to be used on pretty specific surfaces. Jobs would call Dean Hovey, a co-founder of IDEO, and demand they design one that would work on anything, including, quote, “blue jeans.” Oh, and he wanted it to cost $15. And he wanted it to have just one button, which would be an Apple hallmark for the next 30ish years. Hovey-Kelley would move to optical encoder wheels, freeing the tracking ball to move however it needed to, and then use injection-molded frames. And thus make the mouse affordable. It’s amazing what can happen when you combine all that user research and academic rigor from Engelbart’s team and the engineering advancements documented at Xerox PARC with world-class industrial design. You see this trend played out over and over with the innovations in computing that are built to last. The mouse would ship with the Lisa and then with the 1984 Mac. Logitech had shipped a mouse in 1982 for $300. After leaving Xerox, Jack Hawley founded a company to sell a mouse for $400 the same year. Microsoft released a mouse for $200 in 1983. But Apple changed the world when Steve Jobs demanded the mouse ship with all Macs. 
The IBM PC would use a mouse, and from there it would become ubiquitous in personal computing. Desktops would ship with a mouse. Laptops would have a funny little button that could be used as a mouse when the actual mouse was unavailable. The mouse would ship with extra buttons that could be mapped to additional workflows or macros. And even servers were then outfitted with switches that allowed using a device that switched the keyboard, video, and mouse between them during the rise of large server farms to run the upcoming dot com revolution. Trays would be put into most racks, with a single U, or unit, of the rack being used to see what you’re working on; especially after Windows or windowing servers started to ship. As various technologies matured, other innovations came along for input devices. The mouse would go optical in 1980 and ship with early Xerox Star computers, but what we think of as an optical mouse wouldn’t really ship until 1999, when Microsoft released the IntelliMouse. Some of that tech came to them via Hewlett-Packard, which later ended up with DEC’s assets through its acquisition of Compaq, and some of those same engineers had come from the original mainstreamer of the mouse, PARC, when Bob Taylor started DEC’s Systems Research Center. The LED sensor on the mouse stuck around. And thus ended the era of the mouse pad, once a hallmark of many a marketing give-away. Finger tracking devices came along in 1969 but were far too expensive to produce at the time. As capacitive sensitive pads, or trackpads, came down in price and the technology matured, those began to replace the previous mouse types of devices. The 1982 Apollo computers were the first to ship with a touchpad, but it wasn’t until Synaptics launched the TouchPad in 1992 that they began to become common, showing up in 1995 on Apple laptops and then becoming ubiquitous over the coming years. 
In fact, the IBM Thinkpad and many others shipped laptops with little red nubs in the keyboard for people that didn’t want to use the TouchPad for a while as well. Some advancements in the mouse didn’t work out. Apple released the hockey-puck-shaped mouse in 1998, when they released the iMac. It was USB, which replaced the ADB interface. USB lasted. The shape of the mouse didn’t. Apple would go to the monolithic surface mouse in 2000, go wireless in 2003, and then release the Mighty Mouse in 2005. The Mighty Mouse would have a capacitive touch sensor and, since people wanted to hear a click, would produce one with a little speaker. This also signified the beginning of Bluetooth as a means of connecting a mouse. Laptops began to replace desktops for many, and so the mouse itself isn’t as dominant today. And with mobile and tablet computing, touchscreens rose to replace many uses for the mouse. But even today, when I edit these podcasts, I often switch over to a mouse simply because other means of dragging around timelines simply aren’t as graceful. And using a pen, as Engelbart’s research from the 60s indicated, simply gets fatiguing. Whether or not it’s always obvious, we have an underlying story we’re often trying to tell with each of these episodes. We obviously love unbridled innovation and a relentless drive towards a technologically utopian multiverse. But taking a step back during that process and researching what people want means less work and faster adoption. Doug Engelbart was a lot of things, but one net-new point we’d like to make is that he was possibly the most innovative in harnessing user research to make sure that his innovations would last for decades to come. Today, we’d love to research every button and heat map and track eyeballs. 
But remember, as he did, that our job is to augment human intellect. That is best done when we make our advances useful, and it helps keep us, and the forks in technology that descend from us, from having to backtrack decades of work in order to take the next jump forward. We believe in the reach of your innovations. So next time you’re working on a project, save yourself time, save your code a little cyclomatic complexity, and save users the frustration of having to relearn a whole new thing. And research what you’re going to do first. Because you never know. Something you engineer might end up being touched by nearly every human on the planet the way the mouse has. Thank you, Engelbart. And thank you to NASA and Bob Taylor at ARPA for funding such important research. And thank you to Xerox PARC, for carrying the torch. And to Steve Jobs, for making the mouse accessible to every day humans. As with many an advance in computing, there are a lot of people that deserve a little bit of the credit. And thank you, listeners, for joining us for another episode of the History of Computing podcast. We’re so lucky to have you. Now stop consuming content and go change the world.

Happy Birthday ENIAC


Today we’re going to celebrate the birthday of the first real multi-purpose computer: the gargantuan ENIAC, which would have turned 74 years old today, on February 15th. That’s many generations ago in computing. The year is 1946. World War II raged from 1939 to 1945. We’d cracked Enigma with computers and scientists were thinking of more and more ways to use them. The press is now running articles about a “giant brain” built in Philadelphia. The Electronic Numerical Integrator and Computer was a mouthful, so they called it ENIAC. It was the first true general-purpose electronic computer. Before that there were electromechanical monstrosities. Those had to physically move a part in order to process a mathematical formula. That took time. ENIAC used vacuum tubes instead. A lot of them. To put things in perspective: every hour of processing by the ENIAC was worth 2,400 hours of work calculating formulas by hand. And it’s not like you can do 2,400 hours in parallel between people, or in a row of course. So it made the previously almost impossible, possible. Sure, you could figure out the settings to fire a shell where you wanted it to go in a minute rather than in about a full day of running calculations. But math itself, for the purposes of math, was about to get really, really cool. The Bush Differential Analyzer, an earlier mechanical computer, had been built in the basement of the building that is now the ENIAC museum. The University of Pennsylvania ran a class on wartime electronics, based on their experience with the Differential Analyzer. John Mauchly and J. Presper Eckert met in 1941 while taking that class, a topic that had included lots of shiny new or newish things like radar and cryptanalysis. That class was mostly on ballistics, a core focus at the Moore School of Electrical Engineering at the University of Pennsylvania. More accurate ballistics would be a huge contribution to the war effort. 
But Eckert and Mauchly wanted to go further, building a multi-purpose computer that could analyze weather and calculate ballistics. Mauchly got all fired up and wrote a memo about building a general purpose computer. But the University shot it down. And so ENIAC began life as Project PX, when Herman Goldstine acted as the main sponsor after seeing their proposal and digging it back up. Mauchly would team up with Eckert to design the computer, and the effort was overseen and orchestrated by Major General Gladeon Barnes of the US Army Ordnance Corps. Thomas Sharpless was the master programmer. Arthur Burks built the multiplier. Robert Shaw designed the function tables. Harry Huskey designed the reader and the printer. Jeffrey Chu built the dividers. And Jack Davis built the accumulators. Ultimately it was just a really big calculator, and not a computer that ran stored programs in the same way we do today. Although ENIAC did get an early version of stored programming that used a function table for read-only memory. The project was supposed to cost $61,700. The University of Pennsylvania actually spent half a million dollars worth of metal, tubes and wires. And of course the scientists weren’t free. That’s around six and a half million dollars in cash today. And of course it was paid for by the US Army. Specifically the Ballistic Research Laboratory. It was designed to calculate firing tables to make blowing things up a little more accurate. Herman Goldstine chose a team of programmers that included Betty Jennings, Betty Snyder, Kay McNulty, Fran Bilas, Marlyn Meltzer, and Ruth Lichterman. They were chosen from a pool of 200 and set about writing the necessary formulas for the machine to process the requirements provided by people using time on the machine. In fact, Kay McNulty invented the concept of subroutines while working on the project. 
They would flip switches and plug in cables as a means of programming the computer. And programming took weeks of figuring up complex calculations on paper. Then it took days of fiddling with cables, switches, tubes, and panels to input the program. Debugging was done step by step, similar to how we use breakpoints today. They would feed ENIAC input using IBM punch cards and readers. The output was punch cards as well, and these punch cards acted as persistent storage. The machine used standard octal-base radio tubes. 18,000 tubes, and they ran at a lower voltage than they could have in order to minimize them blowing out and creating heat. Each digit used in calculations took 36 of those vacuum tubes, and the machine had 20 accumulators that could run 5,000 operations per second. The accumulators used two of those tubes to form a flip-flop, and they got them from the Kentucky Electrical Lamp Company. Given the number that blew every day, they must have loved life once engineers got it down to only blowing a tube every couple of days. ENIAC was a modular computer and used different panels to perform different tasks, or functions. It used ring counters with 10 positions for a lot of operations, making it a decimal computer, as opposed to the binary computational devices we have today. The pulses between the rings were used to count. Suddenly computers were big money. A lot of research had happened in a short amount of time. Some had been government funded and some had been part of corporations, and it became impossible to untangle the two. This was pretty common with technical advances during World War II and the early Cold War years. John Atanasoff and Cliff Berry had ushered in the era of the digital computer in 1939 but hadn’t finished. Mauchly had seen their work in 1941. ENIAC was used to run a number of calculations for the Manhattan Project, allowing us to blow more things up than ever. That project took over a million punch cards and took precedence over artillery tables. 
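Those ten-position ring counters are easy to model in software. Here’s a minimal sketch of the counting scheme: each digit is a ring advanced one position per pulse, and a wrap from 9 back to 0 sends a carry pulse into the next ring. This is an illustration of the decimal-counting idea only, not of ENIAC’s actual circuitry, and the class names are ours.

```python
class RingCounter:
    """One decimal digit, modeled as a ten-position ring."""
    def __init__(self):
        self.position = 0  # which of the ten positions is currently active

    def pulse(self) -> bool:
        """Advance one position; return True when the ring wraps (a carry)."""
        self.position = (self.position + 1) % 10
        return self.position == 0

class Accumulator:
    """A row of ring counters forming a multi-digit decimal register."""
    def __init__(self, digits: int = 10):
        self.rings = [RingCounter() for _ in range(digits)]

    def add_pulses(self, n: int):
        """Feed n pulses into the ones digit, propagating carries upward."""
        for _ in range(n):
            i = 0
            while i < len(self.rings) and self.rings[i].pulse():
                i += 1  # a wrap carries a pulse into the next digit

    def value(self) -> int:
        return sum(r.position * 10 ** i for i, r in enumerate(self.rings))

acc = Accumulator()
acc.add_pulses(1234)
acc.add_pulses(766)
print(acc.value())  # → 2000
```

Counting by pulses like this is why ENIAC worked natively in base ten; a binary machine replaces each ten-position ring with a single flip-flop per bit.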
John von Neumann worked with a number of mathematicians and physicists, including Stanislaw Ulam, who developed the Monte Carlo method. That led to a massive reduction in programming time. Suddenly programming became more about I/O than anything else. To promote the emerging computing industry, the Pentagon had the Moore School of Electrical Engineering at The University of Pennsylvania launch a series of lectures to further computing at large. These were called the Theory and Techniques for Design of Electronic Digital Computers, or just the Moore School Lectures for short. The lectures focused on the various types of circuits and the findings from Eckert and Mauchly on building and architecting computers. Goldstine would talk at length about math, and other developers would give talks, looking forward to the development of the EDVAC and back at how they got where they were with ENIAC. As the University began to realize the potential business impact and monetization, they decided to b