
Claude Shannon and the Origins of Information Theory

     9/9/2020

The name Claude Shannon has come up 8 times so far in this podcast - more than any other single person. We covered George Boole and the concept that a Boolean is a 0 or a 1, and that using Boolean algebra you can abstract simple circuits into practically any higher level concept. Boolean algebra had been used by a number of mathematicians to perform some complex tasks, including by Lewis Carroll in Through The Looking Glass to make words into math.

And binary had effectively been used in Morse code to enable communications over the telegraph.

But it was Claude Shannon who laid the foundation for a theory that took the concept of communicating over the telegraph and applied Boolean algebra to make a higher level of communication possible. And it all starts with bits, which we can thank Shannon for.

Shannon grew up in Gaylord, Michigan. His mother was a high school principal and his grandfather had been an inventor. He built a telegraph as a child, using a barbed wire fence. But barbed wire isn’t the greatest conductor of electricity and so… noise. And thus information theory began to ruminate in his mind. He went off to the University of Michigan and got a bachelor’s degree in electrical engineering and another in math. A perfect combination for laying the foundation of the future.

And he got a job as a research assistant to Vannevar Bush, who wrote the seminal paper, As We May Think. At that time, Bush was working at MIT on The Thinking Machine, or Differential Analyzer. This was before World War II and they had no idea, but their work was about to reshape everything. At the time, what we think of as computers today were electro-mechanical. They had gears that were used for the more complicated tasks, and switches, used for simpler tasks.

Shannon devoted his master’s thesis to applying Boolean algebra, thus getting rid of the wheels, which moved slowly, and allowing the computer to go much faster. He broke Boole’s Laws of Thought down into a form that could be applied to relay circuitry. That 1937 paper was called A Symbolic Analysis of Relay and Switching Circuits and helped set the stage for the Hackers revolution that came shortly thereafter at MIT.

At the urging of Vannevar Bush, he got his PhD at MIT with a dissertation that applied algebra to theoretical genetics, theorizing that you could break the genetic code down into a matrix. The structure of DNA would be worked out in 1953, when Watson and Crick described the double helix and Rosalind Franklin used X-ray crystallography to capture the photo that revealed that structure. George Gamow would later propose one of the first schemes for how that structure might carry a genetic code.

He headed off to Princeton in 1940 to work at the Institute for Advanced Study, where Einstein and von Neumann were. He quickly moved over to the National Defense Research Committee, as the world was moving towards World War II. A lot of computing was going into making projectiles, or bombs, more accurate. He co-wrote a paper called Data Smoothing and Prediction in Fire-Control Systems during the war. 

He’d gotten a primer in early cryptography as a kid, reading The Gold-Bug by Edgar Allan Poe. And it struck his fancy. So he started working on theories around cryptography, everything he’d learned forming into a single theory. He would have lunch with Alan Turing during the war. And it was around this work that he first coined the term “information theory” in 1945.

A universal theory of communication gnawed at him and formed during this time, from the Institute, to the National Defense Research Committee, to Bell Labs, where he helped encrypt communications between world leaders. He hid the work from everyone, even through failed relationships. He broke information down into the smallest possible unit, a bit, short for a binary digit. He worked out how to compress information that was most repetitive, similar to how Morse code compressed the number of taps on the electrical wire by making the most common letters the shortest to send. Eliminating redundant communications established what we now call compression.

Today we use the term lossless compression frequently in computing. He worked out that the minimum amount of information needed to send a message is given by its entropy, H = -Σ p_i log2 p_i, summed over the probability p_i of each possible symbol.
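
To make that concrete, here’s a minimal sketch of the entropy formula in Python. The probability distributions below are just examples I picked, not anything from Shannon’s paper:

import math

def entropy(probabilities):
    # H = -sum(p_i * log2(p_i)) over outcomes with nonzero probability
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries a full bit per toss...
print(entropy([0.5, 0.5]))    # 1.0
# ...while a heavily biased one carries far less.
print(entropy([0.99, 0.01]))  # ~0.08

A fair coin works out to exactly one bit per toss, while a heavily biased coin carries far less information - which is exactly the redundancy that compression squeezes out.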

His paper, put out while he was at Bell, was called “A Mathematical Theory of Communication” and came out in 1948. You could now change any data to a zero or a one and then compress it. Further, he found a way to calculate the maximum amount of information that could be sent over a communication channel before it became garbled due to noise. We now call this the Shannon Limit. And once he had that, he derived how to analyze information with math to correct for noise. That barbed wire fence could finally be useful. This would be used in all modern information connectivity. For example, when I took my Network+ we spent an inordinate amount of time learning about Carrier-Sense Multiple Access with Collision Detection (CSMA/CD), a media access control (MAC) method that uses carrier sensing to defer transmissions until no other stations are transmitting.
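
That limit is usually written as C = B log2(1 + S/N): channel capacity is bandwidth times the log of one plus the signal to noise ratio. Here’s a quick sketch of that formula in Python; the 3 kHz bandwidth and 30 dB signal to noise ratio are just illustrative numbers, roughly what an old phone line might offer:

import math

def channel_capacity(bandwidth_hz, signal_to_noise):
    # Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a plain ratio
    return bandwidth_hz * math.log2(1 + signal_to_noise)

# Illustrative numbers only: a 3 kHz line with a signal to noise ratio of 1000 (30 dB)
print(channel_capacity(3000, 1000))  # roughly 30,000 bits per second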

And as his employer, Bell Labs helped shape the future of computing. Along with Unix, C, C++, the transistor, and the laser, information theory is a less tangible discovery - and yet, given what we all have in our pockets and on our wrists these days, maybe the most tangible of all. Having mapped the limit, Bell started looking to reach it. And so the digital communication age was born when the first modem came out of his former employer, Bell Labs, in 1958. And just across the way in Boston, ARPA would begin working on the first Interface Message Processor in 1967, the humble beginnings of the Internet.

His work done, he went back to MIT. His theories were applied to all sorts of disciplines. But he comes up less and less in the story from here. Over time we started placing bits on devices. We started retrieving those bits. We started compressing data. Digital images, audio, and more. It would take 35 or so years

He consulted with the NSA on cryptography. In 1949 he published Communication Theory of Secrecy Systems, which pushed cryptography to the next level. His paper Prediction and Entropy of Printed English in 1951 practically created the field of natural language processing, which evolved into various branches of machine learning. He helped give us the Nyquist–Shannon sampling theorem, used to understand aliasing, derive maximum throughput, digitize signals like RGB video, and of course reason about signal to noise.

He loved games. He theorized the Shannon Number, or the game-tree complexity of chess - roughly 10 to the 120th power possible games. In case you’re curious, part of the reason Deep Blue can win at chess is that it can brute force its way through enough of that unimaginably large tree. His love of games continued and in 1949 he presented Programming a Computer for Playing Chess, the paper where that number appeared. That was the first time we seriously thought about computers playing chess. And he’d have a standing bet that a computer would beat a human grand master at chess by 2001. Garry Kasparov lost to Deep Blue in 1997.

That curiosity extended far beyond chess. He would build Theseus in 1950 - a maze with a mechanical mouse that learned how to escape, using relays from phone switches. One of the earliest forms of machine learning. In 1961 he would co-invent the first wearable computer to help win a game of roulette. That same year he designed the Minivac 601 to help teach how computers worked.

So we’ll leave you with one last bit of information. Shannon’s maxim is that “the enemy knows the system.” I used to think it was just a shortened version of Kerckhoffs’s principle, which says a cryptographic system should stay secure even if everything about it except the key is public knowledge - think of modern public key ciphers, where anyone can understand the algorithm but can’t break the encryption without the private key. Thing is, the more I know about Shannon the more I suspect that what he was really doing was giving the principle a broader meaning. So think about that as you try and decipher what is and what is not disinformation in such a noisy world.

Lots and lots of people would carry on the great work in information theory, giving us things like Kullback–Leibler divergence, or relative entropy. And we owe them all our thanks. But here’s the thing about Shannon: math. He took things that could easily have remained theories - and he proved them. Because science can refute disinformation. If you let it.


A Retrospective On Google, On Their 22nd Birthday

     9/4/2020

We are in strange and uncertain times. The technology industry has always managed to respond to strange and uncertain times with incredible innovations that lead to the next round of growth. Growth that often comes with much higher rewards and leaves the world in a state almost unimaginable in previous iterations. The last major inflection point for the Internet, and computing in general, was when the dot com bubble burst.

The companies that survived that time in the history of computing and stayed true to their course sparked the Web 2.0 revolution. And their shareholders were rewarded: exits and valuations in the millions during the dot com era became billions in the Web 2.0 era. None as iconic as Google. They finally solved how to make money at scale on the Internet and in the process validated that search was a place to do so.

Today we can think of Google, or the resulting parent Alphabet, as a multi-headed hydra. The biggest of those heads is Search, which includes AdWords and AdSense. But Google has long since stopped being a one-trick pony. They also include Google Apps, Google Cloud, Gmail, YouTube, Google Nest, Verily, self-driving cars, mobile operating systems, and one of the more ambitious, Google Fiber. But how did two kids at Stanford manage to build the third US company to be valued at a trillion dollars?

Let’s go back to 1998. The Big Lebowski, Fear and Loathing in Las Vegas, There’s Something About Mary, The Truman Show, and Saving Private Ryan were in the theaters. Puff Daddy hadn’t transmogrified into P Diddy. And Usher had three songs in the Top 40. Boyz II Men, Backstreet Boys, Shania Twain, and Third Eye Blind couldn’t be avoided on the airwaves. They’re now pretty much relegated to 90s disco nights. But technology offered a bright spot. We got the first MP3 player, the Apple Newton, the Intel Celeron and Xeon, the Apple iMac, MySQL, v.90 Modems, StarCraft, and two Stanford students named Larry Page and Sergey Brin took a research project they started in 1996 with Scott Hassan, and started a company called Google (although Hassan would leave Google before it became a company). 

There were search engines before Page and Brin. But most produced search results that just weren’t that great. In fact, most were focused on becoming portals. They took their cue from AOL and other ISPs who had springboarded people onto the web from services that had been walled gardens. As those services became interconnected into a truly open Internet, the amount of diverse content began to explode and people just getting online found it hard to actually find the things they were interested in. Going from ISPs with portals to the open Internet, many began using a starting page like Archie, Lycos, Jughead, Veronica, Infoseek, and of course Yahoo!

Yahoo! had grown fast out of Stanford, having been founded by Jerry Yang and David Filo. By 1998, the Yahoo! page was full of text. Stock tickers, links to shopping, and even horoscopes. It took a lot of the features from the community builders at AOL. The model to make money was banner ads and that meant keeping people on their pages. Because it wasn’t yet monetized and in fact acted against the banner-loading business model, searching for what you really wanted to find on the Internet didn’t get a lot of love. The portals of the day had pretty crappy search compared to what Page and Brin were building.

They initially called the search engine BackRub back in 1996. As academics (and the children of academics) they knew that the more papers cited another paper, the more valuable that paper was. Applying that same logic allowed them to rank websites based on how many other sites linked to them. This became the foundation of the original PageRank algorithm, which continues to evolve today. The name BackRub came from the concept of weighting based on backlinks. That concept had come from a tool called RankDex, which was developed by Robin Li, who went on to found Baidu.
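
To give a feel for the idea, here’s a toy sketch of that link-counting approach in Python. It’s a bare-bones power iteration over a made-up four-page web, not Google’s actual implementation, and the damping factor is just the commonly cited 0.85:

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links out to
    pages = list(links)
    rank = {page: 1 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outbound in links.items():
            if not outbound:  # dangling page: spread its rank evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outbound)
                for target in outbound:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical little web: pages that earn more inbound links rank higher.
web = {"a": ["b"], "b": ["a", "c"], "c": ["a"], "d": ["a"]}
print(pagerank(web))

Run it and page "a", which earns the most inbound links, floats to the top - the same intuition as counting citations.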

Keep in mind, it started as a research project. The transition from research project to company meant finding a good name. Being math nerds, they landed on “Google,” a play on “googol,” or a 1 followed by a hundred zeros.

A year in, they were still running off Stanford University computers. As their crawlers searched the web they needed more and more computing time. So they went out looking for funding and in 1998 got $100,000 from Sun Microsystems cofounder Andy Bechtolsheim. Jeff Bezos from Amazon, David Cheriton, Ram Shriram and others kicked in some money as well and they got a million dollar round of angel investment. And their algorithm kept getting more and more mature as they were able to catalog more and more sites. By 1999 they went out and raised $25 million from Kleiner Perkins and Sequoia Capital, insisting the two invest equally, which hadn’t been done before.

They were frugal with their money, which allowed them to weather the coming storm when the dot com bubble burst. They built computers to process data using off-the-shelf hardware they got at Fry’s and other computer stores, and they brought in some of the best talent in the area as other companies were going bankrupt.

They also used that money to move into offices in Palo Alto and in 2000 started selling ads through a service they called AdWords. It was a simple site and the ads were text instead of the banners popular at the time. It was an instant success and I remember being drawn to it after years of looking at that increasingly complicated Yahoo! landing page. And they successfully inked a deal with Yahoo! to provide organic and paid search, betting the company that they could make lots of money. And they were right. The world was ready for simple interfaces that provided relevant results. And the results were relevant for advertisers, who could move to a pay-per-click model and bid on how much they wanted to pay for each click. They could serve ads for nearly any company and with little human interaction because they spent the time and money to build great AI to power the system. You put in a credit card number and they got accurate projections on how successful an ad would be. In fact, ads that were relevant often cost less per click than those that weren’t. And it quickly became apparent that they were just printing money on the back of the new ad system.

They brought in Eric Schmidt to run the company, per the agreement they made when they raised the $25 million and by 2002 they were booking $400M in revenue. And they operated at a 60% margin. These are crazy numbers and enabled them to continue aggressively making investments. The dot com bubble may have burst, but Google was a clear beacon of light that the Internet wasn’t done for.

In 2003 Google moved into a space now referred to as the Googleplex, in Mountain View, California. In a sign of the times, that was land formerly owned by Silicon Graphics. They saw how the ad model could improve beyond paid placement and banners, acquired Applied Semantics, and launched AdSense. They could afford to, with $1.5 billion in revenue.

Google went public in 2004, with revenues of $3.2 billion. Underwritten by Morgan Stanley and Credit Suisse, who took half the standard fees for leading the IPO, Google sold nearly 20 million shares. By then they were basically printing money and the company had a market cap of $23 billion, just below that of Yahoo!. That’s the year they acquired Where 2 Technologies to convert their mapping technology into Google Maps, which was launched in 2005. They also bought Keyhole in 2004, which the CIA had invested in, and that was released as Google Earth in 2005. That technology then became critical for turn-by-turn directions, and the directions were enriched using another 2004 acquisition, ZipDash, to get real-time traffic information. At this point, Google wasn’t just responding to queries about content on the web, but was able to respond to queries about the world at large. They also released Gmail and Google Books in 2004.

By the end of 2005 they were up to $6.1 billion in revenue and they continued to invest money back into the company aggressively, looking not only to point users to pages but get into content. That’s when they bought Android in 2005, allowing them to answer queries using their own mobile operating system rather than just on the web. On the back of $10.6 billion in revenue they bought YouTube in 2006 for $1.65 billion in Google stock. This is also when they brought Gmail into Google Apps for Your Domain, now simply known as G Suite - and when they acquired Upstartle to get what we now call Google Docs. 

At $16.6 billion in revenues, they bought DoubleClick in 2007 for $3.1 billion to get the relationships DoubleClick had with the ad agencies. 

They also acquired Tonic Systems in 2007, which would become Google Slides, thus completing a suite of apps that could compete with Microsoft Office.

The first Android release came in 2008 on the back of $21.8 billion revenue. They also released Chrome that year, a project that came out of hiring a number of Mozilla Firefox developers, even after Eric Schmidt had stonewalled doing so for six years. The project had been managed by up and coming Sundar Pichai. That year they also released Google App Engine, to compete with Amazon’s EC2. 

They bought On2, reCAPTCHA, AdMob, VOIP company Gizmo5, Teracent, and AppJet in 2009 on $23.7 Billion in revenue and Aardvark, reMail, Picnic, DocVerse, Episodic, Plink, Agnilux, LabPixies, BumpTop, Global IP Solutions, Simplify Media, Ruba.com, Invite Media, Metaweb, Zetawire, Instantiations, Slide.com, Jambool, Like.com, Angstro, SocialDeck, QuickSee, Plannr, BlindType, Phonetic Arts, and Widevine Technologies in 2010 on 29.3 billion in revenue.

In 2011, Google bought Motorola Mobility for $12.5 billion to get access to patents for mobile phones, along with another almost two dozen companies. This was on the back of nearly $38 billion in revenue. 

The battle with Apple intensified when Apple removed Google Maps from iOS 6 in 2012. But on $50 billion in revenue, Google wasn’t worried. They released the Chromebook in 2012 as well as announcing Google Fiber to be rolled out in Kansas City. 

They launched Google Drive. They bought Waze for just shy of a billion dollars in 2013 to get crowdsourced data that could help bolster what Google Maps was doing. That was on 55 and a half billion in revenue.

In 2014, at $65 billion in revenue, they bought Nest, getting thermostats and cameras in the portfolio. 

Pichai, who had worked in product on Drive, Gmail, Maps, and Chromebook took over Android and by 2015 was named the next CEO of Google when Google restructured with Alphabet being created as the parent of the various companies that made up the portfolio. By then they were up to 74 and a half billion in revenue. And they needed a new structure, given the size and scale of what they were doing. 

In 2016 they launched Google Home, which has now brought AI into 52 million homes. They also bought nearly 20 other companies that year, including Apigee, to get an API management platform. By then they were up to nearly $90 billion in revenue.

2017 saw revenues rise to $110 billion and 2018 saw them reach $136 billion. 

In 2019, Pichai became the CEO of Alphabet, now presiding over a company with over $160 billion in revenues. One that has bought over 200 companies and employs over 123,000 humans. Google’s mission is “to organize the world's information and make it universally accessible and useful” and it’s easy to connect most of the acquisitions with that goal.

I have a lot of friends in and out of IT that think Google is evil. Despite their desire not to do evil, any organization that grows at such a mind-boggling pace is bound to rub people wrong here and there. I’ve always gladly used their free services, even knowing that when you aren’t paying for a product, you are the product. We have a lot to be thankful to Google for on this birthday. As Netscape was the symbol of the dot com era, they were the symbol of Web 2.0. They took the mantle for free mail from Hotmail after Microsoft screwed the pooch with that.

They applied math to everything, revolutionizing marketing and helping people connect with the information they were most interested in. They cobbled together a mapping solution and changed the way we navigate through cities. They made Google Apps and evolved the way we use documents, making us more collaborative and forcing the competition, namely Microsoft Office, to adapt as well. They dominated the mobility market, capturing over 90% of devices. They innovated cloud stacks. And here’s the crazy thing: from the beginning, they didn’t make up a lot. They borrowed the foundational principles of that original algorithm from RankDex, Gmail was a new and innovative approach to Hotmail, Google Maps was a better Encarta, their cloud offerings were structured similarly to those of Amazon. And the list of acquisitions that helped them get patents or talent or ideas to launch innovative services is just astounding.

Chances are that today you do something that touches on Google. Whether it’s the original search, controlling the lights in your house with Nest, using a web service hosted in their cloud, sending or receiving email through Gmail or one of the other hundreds of services. The team at Google has left an impact on each of the types of services they enable. They have innovated business and reaped the rewards. And on their 22nd birthday, we all owe them a certain level of thanks for everything they’ve given us.

So until next time, think about all the services you interact with. And think about how you can improve on them. And thank you for tuning in to this episode of the History of Computing Podcast.


Iran and Stuxnet

     1/24/2020

Attacking Iran with Stuxnet. Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we’re going to cover Stuxnet, which we now consider the first real act of cyber warfare.

Iran has arguably been in turmoil since the fall of the Persian empire. Alexander the Great conquered Iran in 336 BC and then the Macedonians ruled until the empire fragmented and one arm, the Seleucids, ruled until the Parthians took it in 129 BC. Then the Sasanians, of Persian descent, ruled until the Muslim conquest of Persia in 651. The region was then ruled by a collection of Muslim dynasties until this weirdo Genghis Khan showed up around 1220. After a few decades the Muslim forces regained control in 1256 and the area returned to turning over to different Muslim dynasties every couple hundred years on average until 1925, when the Pahlavi took control. The final Shah of that regime was ousted during the Islamic Revolution in Iran in 1979. Ruhollah Khomeini ruled for the first ten years until Sayyid Ali Hosseini Khamenei took over after his death in 1989.

Something very important happened the year before that would shape Iran up until today. In 1988 Pakistan became a nuclear power. Iran started working toward a nuclear program shortly thereafter, buying equipment from Pakistan. Those centrifuges would be something many, including the US, would attempt to keep out of Iranian hands through to today. While you can argue the politics of that, those are the facts. Middle Eastern politics, wars over oil, and wars over territory have all ensued. In 2015, Iran reached agreement on the Joint Comprehensive Plan of Action, commonly referred to as the Iran nuclear deal, with the US and the EU, and their nuclear ambitions seemed to be stalled until US president Donald Trump pulled out of it. A little before the recording of this episode, General Soleimani was killed by a US attack. One of the reasons the deal was negotiated in the first place was that the Iranians received a huge setback in their nuclear program in 2010, when the US attacked an Iranian nuclear facility.

It’s now the most well-researched computer worm. But who was behind Stuxnet? Kim Zetter took a two year journey researching the worm, now documented in her book Countdown to Zero Day. The US Air Force was created in 1947. In the early 2000s, advanced persistent threats, or APTs, began to emerge following Operation Eligible Receiver in 1997. These are pieces of malware that are specifically crafted to attack specific systems or people. Now that the field was seen as a new frontier of war, US Cyber Command was founded in 2009. And they developed weapons to attack supervisory control and data acquisition (SCADA) systems, amongst other targets. By the mid-2000s, Siemens had built many of these industrial control systems. The Maroochy incident had brought these systems to light as targets, and developers had not been building them with security in mind, making them quite juicy targets. So the US and Israel wrote some malware that destroyed centrifuges by hitting the Siemens software sitting on Windows embedded operating systems. It was initially discovered by VirusBlokAda engineer Sergey Ulasen and called Rootkit.Tmphider. Symantec originally called it W32.Temphid and then changed the name to W32.Stuxnet, based on a mashup of stub and mrxnet.sys from the source code.

The malware was signed and targeted a bug in the operating system to install a rootkit. Sergey reported the bug to Microsoft and went public with the discovery. This led us into an era of cyber warfare, as the first widespread attack hitting industrial control systems. Stuxnet wasn’t your run-of-the-mill DDoS attack. Each of the 3 variants from 2010 had 150,000 lines of code and targeted those control systems, destroying a third of Iranian centrifuges by causing the Step-7 software systems to handle the centrifuges improperly. Iranian nuclear engineers had obtained the Step-7 software even though it was embargoed, and the malware used a hard-coded password in that software to change the rotation speed of the motors, targeting a specific uranium enrichment facility.

In 2011, Gary Samore, acting White House Coordinator for Arms Control and Weapons of Mass Destruction, would all but admit the attack was state sponsored. After that, in 2012, Iranian hackers used wiper malware to destroy 35,000 computers at Saudi Aramco, costing the organization tens of millions of dollars. Saipem was hit in 2018. And the Sands casino was hit after Sheldon Adelson said the US should nuke Iran. While not an official response, Stuxnet would hit another plant in the Hormozgan province a few months later. And it continues in some form today. Since Iran and Israel are such good friends, it likely came as a shock when Gabi Ashkenazi, head of the Israeli Defense Forces, listed Stuxnet as one of his successes.

And so the age of state-sponsored asymmetric cyber conflict was born. Iran, North Korea, and others were suddenly able to punch above their weight. It was proven that what began in cyber could have real-world consequences. And very small and skilled teams could get as much done as larger, more bureaucratic organizations - much as we see small, targeted teams of developers able to compete head-on with larger software products. Why is that? Because often times, a couple of engineers with deep domain knowledge are equally as impactful as larger teams with a wider skill set.


The Brief History Of The Battery

     3/9/2020

Most computers today have multiple batteries. Going way, way back, most had a CMOS or BIOS battery used to run the clock and keep BIOS configurations when the computer was powered down. These have mostly centered around the CR2032 lithium button cell battery, also common in things like garage door openers and many of my kids’ toys!

 

Given the transition to laptops for a lot of people, now that families, schools, and companies mostly deploy one computer per person, there’s a larger battery in a good percentage of the machines being made. Laptops mostly use lithium ion batteries, which we’ll get to later in this episode.

 

The oldest known batteries are the “Baghdad batteries,” dating back to about 200 BC. They could have been used for a number of things, like electroplating. But it would take 2,000 years to get back to the idea. As is often the case, things we knew as humans, once backed up with science, became much, much more. First, scientists were studying positive and negative elements and forming an understanding that electricity flowed between them. Like the English natural scientist William Gilbert, who first established some of the basics of electricity and magnetism. And Sir Thomas Browne, who continued to refine theories and was the first to call it “electricity.” Then another British scientist, Peter Collinson, sent Benjamin Franklin an electricity tube, one of the devices these early experiments had begun to produce.

 

Benjamin Franklin spent some time writing back and forth with Collinson, and he linked capacitors together into what he called a battery in 1749. Then in 1752 he flew a kite and proved that electrical current flowed through a kite string and could be conducted through a metal key, showing that electricity behaved like a fluid. That same year Thomas-Francois Dalibard also proved the hypothesis, using a large metal rod struck by lightning.

 

Budding scientists continued to study electricity and refine the theories. In 1799, Alessandro Volta built a battery by alternating zinc, cloth soaked in brine, and silver, and stacking them. This was known as a voltaic pile and would release a steady current. The batteries corroded fast, but today we still call the potential that pushes a current of one amp through a resistance of one ohm a volt. Suddenly we were creating electricity from an electrochemical reaction.

 

People continued to experiment with batteries and electricity in general. Giuseppe Zamboni, another Italian physicist, invented the Zamboni pile in 1812. Here, he switched to zinc foil and manganese oxide. Completely unconnected, Swedish chemist Johan August Arfwedson discovered lithium in 1817. Lithium. Atomic number 3. Lithium is an alkali metal found all over the world. It can be used to treat manic depression and bipolar disorder. And it powers today’s smart-everything and Internet of thingsy world. But no one knew that yet.

 

The English chemist John Frederic Daniell invented the Daniell cell in 1836, building on the concept but placing a copper plate in a copper sulfate solution and hanging a zinc plate in the jar or beaker. Each plate had a wire, and the zinc plate would become a negative terminal while the copper plate would be a positive terminal. Suddenly we were able to reliably produce electricity.

 

Robert Anderson would build the first electric car using a battery at around the same time, but Gaston Plante would build the first rechargeable battery in 1859, one that very much resembles the ones in our cars today. He gave us the lead-acid battery, switching to lead oxide in sulfuric acid.

 

In the 1860s the Daniell cell would be improved by Callaud, and a lot of different experiments continued on. The Gassner dry cell came from Germany in 1886, mixing ammonium chloride with plaster of Paris and adding zinc chloride. Shelf life shot up. The National Carbon Company would swap out the plaster of Paris for coiled cardboard. That Columbia Dry Cell would be sold commercially throughout the United States by the National Carbon Company, which would become Eveready, who makes the Energizer batteries that power the weird bunny with the drum.

 

Swedish scientist Waldemar Jungner would give us nickel-cadmium, or NiCd, in 1899, but they were a bit too leaky. So Thomas Edison would patent a new model in 1901, and iterations of these are pretty much common through to today. Lithium would start being used shortly after, by G.N. Lewis, but would not become standard until the 1970s when push button cells started to be put in cameras. Asahi Chemical out of Japan would then give us the lithium ion battery in 1985, brought to market by Sony in 1991, leading to John B. Goodenough, M. Stanley Whittingham, and Akira Yoshino winning the Nobel Prize in Chemistry in 2019.

 

Those lithium ion batteries are used in most computers and smart phones today. The Osborne 1 came in 1981. It was what we now look back on as a luggable computer - a 25 pound machine that could be taken on the road, but that you plugged directly into the wall. The Epson HX-20 would ship the same year with a battery, opening the door to batteries powering computers.

 

Batteries for solar storage and other larger applications require much larger amounts of lithium. This causes an exponential increase in demand and thus a jump in the price, making it more lucrative to mine.

 

Mining lithium to create these batteries is, as with all other large scale operations taken on by humans, destroying entire ecosystems, such as those in Argentina, Bolivia, Chile, and the Tibetan plateau. Each ton of lithium takes half a million gallons of water, another resource that’s becoming more precious. And the waste is usually filtered back into the ecosystem. Most other areas mine lithium out of rock using traditional methods, but there’s certainly still an environmental impact. There are similar impacts to mining Cobalt and Nickel, the other two metals used in most batteries. 

 

So I think we’re glad we have batteries. Thank you to all these pioneers who brought us to the point that we have batteries in pretty much everything. And thank you, listeners, for sticking through to the end of this episode of the History of Computing Podcast. We’re lucky to have you. 


Dungeons && Dragons

     12/27/2019

What do insurance, J.R.R. Tolkien, H.G. Wells, and the Civil War have in common? They created a perfect storm for the advent of Dungeons and Dragons. Sure, D&D might not be directly impactful on the history of computing. But its impacts are far and wide. The mechanics have inspired many a game. And the cultural impact can be seen expansively across the computer gaming universe. D&D came of age during the same timeframe that the original PC hackers were bringing their computers to market. But how did it all start?

We’ll leave the history of board games to the side, given that chess sprang up in northern India over 1500 years ago, spreading first to the Persian empire and then to Spain following the Moorish conquest of that country. And given that card games go back to a time before the Tang Dynasty in 9th century China. And Gary Gygax, the co-creator and creative genius behind D&D, loved playing chess, going back to playing with his grandfather as a young boy. Instead, we’ll start this journey in 1780 with Johann Christian Ludwig Hellwig, who invented the first true wargame to teach military strategy. It was good enough to go commercial. Then Georg Julius Venturini made a game in 1796, then Opiz in 1806, then Kriegsspiel in 1824, which translates from German to wargame. And thus the industry was born.

There were a few dozen other board games, but in 1913 Little Wars, by H.G. Wells, added hollow lead figures, ornately painted, and movement distances to bring us into the era of miniature wargaming. Infantry moved a foot, cavalry moved two, and artillery required other troops to be around it. You fought with spring loaded cannons, and other combat usually resulted in a one to one loss, making the game about trying to knock troops out while they were setting up their cannons. It was cute, but in the years before World War I, many sensed that the release of a war game by the pacifist Wells was a sign of oncoming doom. Indeed it was. But each of these inventors had brought their own innovations to the concept. And each impacted real war, with wargaming being directly linked to the blitzkrieg.

Not a lot happened in innovative new wargames between Wells and the 1950s. Apparently the world was busy fighting real war games. But Jack Scruby started making figures in 1955 and connecting communities, writing a book called All About Wargames in 1957. Then Gettysburg was created by Charles Roberts and released by Avalon Hill, which he founded, in 1958. It was a huge success and attracted a lot of enthusiastic if not downright obsessed players. In the game, you could play the commanders, like Robert E. Lee, Stonewall Jackson, Meade, and many others. You had units of varying sizes and a number of factors could impact the odds of battle. The game mechanics were complex, and it sparked a whole movement of war games that slowly rose through the 60s and 70s.

One of those obsessed gamers was Gary Gygax, an insurance underwriter who started publishing articles and magazines. Gygax started the Lake Geneva Wargames Convention in 1968, which has since moved to Indianapolis after a pitstop in Milwaukee and now brings in upwards of 30,000 attendees. Gygax collaborated with his friend Jeff Perren on a game they released in 1970 called Chainmail. Chainmail got a supplement that introduced spells, magic items, dwarves, and hobbits - which seems based on Tolkien novels, but according to Gygax was more a composite of a lot of pulp novels, including one of his favorites, the Conan series.

1970 turned out to be a rough year, as Gygax got laid off from the insurance company and had a family with a wife and 5 kids to support. That’s when he started making games as a career. At first, it didn’t pay too well, but he kept at it and published Chainmail with Guidon Games, which started selling a whopping 100 copies a month. At the time, they were using 6 sided dice, but other numbering systems worked better. They started doing 1-10 or 1-20 random number generation by throwing poker chips in a coffee can, but then Gary found weird dice in a school supply catalog and added the crazy idea of a 20 sided die. Now a symbol found on t-shirts and a universal calling card of table top gamers.

At about the same time, University of Minnesota history student Dave Arneson met Gygax at Gen Con and took Chainmail home to the Twin Cities, where he started improving the rules, releasing his own derivative game called Blackmoor. He came back to Gen Con the next year after testing the system, and he and Gygax would go on to collaborate on an updated and expanded set of rules. Gygax would codify much of what Arneson didn’t want to codify, as Arneson found lawyering the rules to be less fun from a gameplay perspective. But Gary, the former underwriter, was a solid rule-maker, and thus role-playing games were born, in a game first called The Fantasy Game. Gary wrote a 50 page instruction book, which by 1973 had evolved into a 150-page book. He shopped it to a number of game publishers, but none had a book that thick or could really grok the concept of role-playing. Especially one with concepts borrowed from across the pulps.

In the meantime, Gygax had been writing articles and helping others with games, and doing a little cobbling on the side. Because everyone needs shoes. And so in 1973, Gygax teamed up with childhood friend Don Kaye and started Tactical Studies Rules, which would evolve into TSR, with each investing $1,000. They released Cavaliers and Roundheads on the way to raising the capital to publish the game they were now calling… Dungeons and Dragons. The game evolved further and in 1974 they put out 1,000 copies in a boxed set. To raise more capital they brought in Brian Blume, who invested 2,000 more dollars. Sales of that first run were great, but Kaye passed away in 1975 and Blume’s dad stepped in to buy his shares. They started Dragon magazine, opened The Dungeon Hobby Shop and started hiring people. The game continued to grow, with Advanced Dungeons & Dragons being released with a boatload of books. They entered what we now call a buying tornado and by 1980, sales were well over 8 million dollars.

But in 1979 James Egbert, a Michigan State student, disappeared. A private eye blamed Dungeons and Dragons. Egbert later popped up in Louisiana, but the negative publicity had already started. Another teen, Irving Pulling, committed suicide in 1982 and his mom blamed D&D, then started a group called Bothered About Dungeons and Dragons, or BADD. There’s no such thing as bad publicity though, and sales hit $30 million by ’83. In fact, part of the allure for many, including the crew I played with as a kid, was that it got a bad rap in some ways…

At this point Gary was in Hollywood getting a Dungeons and Dragons cartoon made and letting the Blumes run the company. But they’d overspent, and nearing bankruptcy due to stupid spending, Gygax had to return to Lake Geneva to save the company, which he did by releasing the first book in a long time, one of my favorite D&D books, Unearthed Arcana.

Much drama running the company ensued, which isn’t pertinent to the connection D&D has to computing, but basically Gary got forced out and the company lost touch with players because it was being run by people who didn’t really like gamers or gaming. 2nd edition D&D wasn’t a huge success. But in 1996, Wizards of the Coast bought TSR. They had made a bundle off of Magic: The Gathering, and now that TSR was in the hands of people who loved games and gamers again, they immediately started looking for ways to reinvigorate the brand - which their leadership had loved. The 3rd edition Open Gaming License was published by Wizards of the Coast and allowed third-party publishers to make material compatible with D&D products using what was known as the d20 System Trademark License. Fourth edition came along in 2008, but that Open Gaming License was irrevocable, so most continued using it over the new Game System License, which had been more restrictive. By 2016, when 5th edition came along, this all felt similar to what we’ve seen with Apache, BSD, and MIT licenses, with Wizards moving back to the Open Gaming License which had been so popular.

Now let’s connect Dungeons and Dragons to the impact on computing. In 1975, Will Crowther was working at Bolt, Beranek, and Newman. He’d been playing some of those early copies of Dungeons and Dragons and working on natural language processing. The two went together like peanut butter and chocolate and out popped something that tasted a little like each, a game called Colossal Cave Adventure. If you played Dungeons and Dragons, you’ll remember drawing countless maps on graph paper. Adventure was like that and loosely followed Kentucky’s Mammoth Cave system, given that Crowther was an avid caver. It ran on a PDP-10, and as those spread, so spread the fantasy game, getting updated by Stanford grad student Don Woods in 1976. Now virtual worlds weren’t just on table tops; they sprouted up in Rogue, and by the time I got to college there were countless MUDs, or Multi-User Dungeons, where you could kill other players.

Mattel shipped the Dungeons & Dragons Computer Fantasy Game in 1981, then Dungeon! for the Apple II, and another dozen or so games over the years. These didn’t directly reflect the game mechanics of D&D though. But Pool of Radiance, set in the Forgotten Realms campaign setting of D&D, popped up for Nintendo and PCs in 1988, with dozens of D&D games shipping across a number of campaign settings. You didn’t have to have your friends over to play D&D any more. Out of that evolved Massively Multiplayer Online RPGs, including EverQuest, Ultima Online, Second Life, Dungeons and Dragons Online, Dark Age of Camelot, Runescape, and more. Even more closely aligned with the Dungeons and Dragons game mechanics, you also got The Matrix Online, Star Wars: The Old Republic, Age of Conan, and the list goes on.

Now, in the meantime, Wizardry had shipped in 1981, Dragon Warrior shipped in 1986, and The Legend of Zelda had shipped in 1986 as well. These represented an evolution on a simpler set of rules but using the same concepts. Dragon Warrior had started as Dragon Quest after the creators played Wizardry for the first time. These are only a fraction of the games that used the broad concepts of hit points, damage, and probability of attack, including practically every first person shooter ever made, linking nearly every video game that includes combat to Dungeons and Dragons, if not through direct inspiration, then through aspects of game mechanics.
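
If you’ve never rolled those dice, here’s a toy sketch in Python of the core loop so many of those games borrowed: roll a d20 against a target number, then roll damage on a hit. The target number and damage die below are made up for illustration, not actual D&D rules:

import random

def roll(sides=20):
    # Roll a single die with the given number of sides
    return random.randint(1, sides)

def attack(target_number=12, damage_die=8):
    # Hit if the d20 meets or beats the target, then roll damage
    if roll(20) >= target_number:
        return roll(damage_die)
    return 0  # a miss deals no damage

# With a target of 12, about 45% of attacks should land: (21 - 12) / 20
hits = sum(attack() > 0 for _ in range(10_000))
print(hits / 10_000)

Tweak the target number or the damage die and you have the skeleton of a combat system - which is roughly what every RPG and shooter since has done.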

Dungeons and Dragons also impacted media, appearing in or influencing movies like Mazes and Monsters, an almost comedic look at playing the game, E.T., where I think I first encountered the game, Krull, The Dark Crystal, The Princess Bride, Pathfinder, Excalibur, Camelot, and even The Last Witch Hunter, based off a Vin Diesel character he had separation anxiety with - and it helped reinvigorate Peter Jackson to release nearly the full pantheon of important Tolkien works.

The genre unlocked creativity by allowing nearly unlimited personalization of characters. It has touched every genre of fiction and non-fiction. And the game mechanics are used not only for D&D; derivatives are also used across a variety of other industries. The impact Dungeons and Dragons had on geek culture stretches far and wide. The fact that D&D rose to popularity as many felt the geeks were taking over, with the rise of computing in general and the reinvention of entire economies, certainly connects it to so many aspects of our lives, whether realized or not.

So next time you pick up that controller and hit someone in a game to do a few points of damage, next time you sit in a fantasy movie, next time you watch Game of Thrones, think about this. Once upon a time, there was a game called Chainmail. And someone came up with slightly better game mechanics. And that collaboration led to D&D. Now it is our duty to further innovate those mechanics in our own way. Innovation isn’t replacing manual human actions with digital actions in a business process, it’s upending the business process or industry with a whole new model. Yet the business process usually needs to be automated to free us to rethink the model. Just like the creators of D&D did. If an insurance underwriter could have such an outsized impact on the world in the 1970s, what kind of impact could you be having today? Roll a d20 and find out! If you roll a 1, repeat the episode. Either way, have a great day, we’re lucky you decided to listen in!


The History Of Python

     7/6/2020

Haarlem, 1956. No, this isn’t an episode about New York - we’re talking Haarlem, Netherlands. Guido van Rossum is born there that year, and goes on to college in Amsterdam, where he gets a degree in math and computer science. He went on to work at the Centrum Wiskunde & Informatica, or CWI. Here, he worked on BSD Unix and the ABC programming language, which had been written by Lambert Meertens, Leo Geurts, and Steven Pemberton at CWI.

He’d worked on ABC for a few years through the 1980s and started to realize some issues. It had initially been a monolithic implementation, which made it hard to implement certain new features, like being able to access file systems and functions within operating systems. But Meertens was an editor of the ALGOL 68 Report and so ABC did have a lot of the ALGOL 68 influences that are prevalent in a number of more modern languages and could compile for a number of operating systems. It was a great way to spend your 20s if you’re Guido.

But after some time building interpreters and operating systems, many programmers think they have some ideas for what they might do if they just… started over. Especially when they hit their 30s. And so as we turned the corner towards the increasingly big hair of the 1990s, Guido started a new hobby project over the holiday break for Christmas 1989. 

He had been thinking of a new scripting language, loosely based on ABC. One that Unix and C programmers would be interested in, but maybe not as cumbersome as C had become. So he got to work on an interpreter. One that those open source type hackers might be interested in. ALGOL had been great for math, but we needed so much more flexibility in the 90s, unlike bangs. Bangs just needed Aquanet.

He named his new creation Python because he loved Monty Python’s Flying Circus. They had a great TV show from 1969 to 1974, and a string of movies in the 70s and early 80s. They’ve been popular amongst people in IT since I got into IT.

Python is a funny language. It’s incredibly dynamic. Like bash or another shell, we can fire it up, define a variable, and echo that out on the fly. But it can also be procedural, object-oriented, or functional. And it has a standard library but is extensible, so you can add libraries to do tons of new things that wouldn’t make sense to be built in (and so bloat and slow down) other apps. For example, need to get started with big array processing for machine learning projects? Install TensorFlow or NumPy. Or according to your machine learning needs you have PyTorch, SciPy, Pandas, and the list goes on.
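
To show what that dynamism feels like, here’s a tiny, illustrative session. The NumPy lines assume you’ve installed the library yourself (pip install numpy); everything else is just the standard interpreter:

# No declarations, no compile step: define a value and print it on the fly.
x = 3
print(x * 2)  # 6

# With NumPy installed, array math reads almost like the math itself.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
print(a.mean())   # 2.0
print(a @ a)      # dot product: 14.0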

In 1994, 20 developers met at the US National Institute of Standards and Technology (NIST) in Maryland for the first Python workshop, and the first Python evangelists were minted. It was obvious pretty quickly that the modular nature and ease of scripting, combined with an ability to do incredibly complicated tasks, was something special. What was drawing this community in? Well, let’s start with the philosophy, the Zen of Python as Tim Peters wrote it in 1999:

  • Beautiful is better than ugly.
  • Explicit is better than implicit.
  • Simple is better than complex.
  • Complex is better than complicated.
  • Flat is better than nested.
  • Sparse is better than dense.
  • Readability counts.
  • Special cases aren't special enough to break the rules.
  • Although practicality beats purity.
  • Errors should never pass silently.
  • Unless explicitly silenced.
  • In the face of ambiguity, refuse the temptation to guess.
  • There should be one—and preferably only one—obvious way to do it.
  • Although that way may not be obvious at first unless you're Dutch.
  • Now is better than never.
  • Although never is often better than right now.
  • If the implementation is hard to explain, it's a bad idea.
  • If the implementation is easy to explain, it may be a good idea.
  • Namespaces are one honking great idea—let's do more of those!

Those are important enough to be semi-official and can be found by entering “import this” into a Python shell. Another reason Python became important is that it’s multi-paradigm. When I said it could be kinda’ functional? Sure. Use one big old function for everything if you’re moving from COBOL and just don’t wanna’ rethink the world. Or be overly object-oriented when you move from Java and build 800 functions to echo hello world in 800 ways. Wanna map-reduce your Lisp code? Bring it. Or add an extension and program in paradigms I’ve never heard of. The number of libraries and other ways to extend Python out there is pretty much infinite.
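
As a small illustration of that multi-paradigm flexibility, here’s the same toy task - summing the squares of 1 through 5 - written three ways. Purely an example, nothing canonical about it:

from functools import reduce

# Procedural
total = 0
for n in range(1, 6):
    total += n * n

# Object-oriented
class SquareSummer:
    def __init__(self, upto):
        self.upto = upto
    def total(self):
        return sum(n * n for n in range(1, self.upto + 1))

# Functional
functional_total = reduce(lambda acc, n: acc + n * n, range(1, 6), 0)

print(total, SquareSummer(5).total(), functional_total)  # 55 55 55

Same answer each time; Python doesn’t much care which style you bring with you.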

And that extensibility was the opposite of ABC and why Python is special. This isn’t to take anything away from the syntax. It’s meant to be and is an easily readable language. It’s very Dutch, with not a lot of frills like that. It uses white space much as the Dutch use silence. I wish it could stare at me like I was an idiot the way the Dutch often do. But alas, it doesn’t have eyeballs. Wait, I think there’s a library for that. 

So what I meant by white space instead of punctuation is that it uses an indent instead of a curly bracket or keyword to delimit blocks of code. Increase the tabbing and you move to a new block. Many programmers do this in other languages just for readability. Python does it for code. 
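
Here’s a tiny, made-up example of what that looks like in practice - the indentation is the block structure:

def describe(n):
    # Everything indented under the if or the else belongs to that block.
    # There are no curly braces; the whitespace is the syntax.
    if n % 2 == 0:
        kind = "even"
    else:
        kind = "odd"
    return f"{n} is {kind}"

print(describe(4))  # 4 is even
print(describe(7))  # 7 is odd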

Basic statements, which match or are similar to those in most languages, include if, for, while, try, raise, except, class, def, with, break, continue, pass, assert, yield, and import - plus print, until Python 3, when print became a function. It’s amazing what you can build with just a dozen and a half statements in programming. You can have more, but interpreters get slower and compilers get bigger and all that…
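
Just to show how far a handful of those statements go, here’s a small, hypothetical helper that leans on def, if, raise, for, yield, try, and except:

def chunks(items, size):
    # def, if, raise, for, yield: a handful of the core statements in one place
    if size <= 0:
        raise ValueError("size must be positive")
    for start in range(0, len(items), size):
        yield items[start:start + size]

try:
    print(list(chunks([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
except ValueError as err:
    print("bad size:", err)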

Python also has all the expressions you’d expect in a modern language, especially lambdas. And methods. And duck typing, where suitability for a method is determined by the properties of an object rather than its type. This can be great. Or a total pain. Which is why Python has been moving toward gradual typing.
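
Duck typing in one throwaway example - the function below never checks types, it just calls speak() and trusts that whatever walks in can quack (or bark):

class Duck:
    def speak(self):
        return "quack"

class Dog:
    def speak(self):
        return "woof"

def make_it_speak(animal):
    # No isinstance() check: anything with a speak() method works here.
    return animal.speak()

print(make_it_speak(Duck()))  # quack
print(make_it_speak(Dog()))   # woof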

The built-in types of objects are bool, bytearray, bytes, complex, dict, ellipsis (which I overuse), float, frozenset, int, list, NoneType (which I try to never use), NotImplementedType, range, set, str, and tuple - so you can pop mixed tapes into a given object. Not to be confused with a thruple, but not to not be confused I guess…
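
If you want to poke at those types yourself, a quick loop like this one (just an illustration) prints the type name Python reports for a few literal values:

samples = [True, 42, 3.14, 2 + 3j, "hello", b"\x00\x01",
           [1, 2, 3], ("mixed", 1, 2.0), {1, 2, 3},
           frozenset({1, 2}), {"key": "value"}, range(5), None]

for value in samples:
    # type() reports the runtime type of each object
    print(type(value).__name__, repr(value))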

Another draw of Python was the cross-compiler concept. An early decision was to make Python able to talk to C. This won over the Unix and growing Linux crowds. And today we have cross-compilers and bridges for C and C++, Go, .Net, Java, R, machine code, and of course, JavaScript.

Python 2 came in 2000. We got a garbage collection system and a few other features, and 7 point releases over the next 10 years. Python 3 came in 2008 and represented a big change - it was the first Python release that wasn’t fully backward-compatible. We have had 7 point releases in the past 10 years as well. Python 3 turned print into a function, simplified some syntax, moved to storing strings as unicode by default, made range a lazy sequence, changed how loop variables leak out of comprehensions, implemented a simpler set of rules for order comparisons, and much more.
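
A few of those Python 3 changes in a quick, illustrative snippet:

# print is now a function, strings are unicode by default,
# and range() is a lazy sequence rather than a fully built list.
print("héllo", len("héllo"))   # héllo 5  (5 characters, not bytes)

r = range(1_000_000_000)        # no billion-element list is allocated
print(r[10], 10 in r)           # 10 True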

At this point developers were experimenting with deploying microservices. Microservices is a software development architecture where we build small services, perhaps just a script or a few scripts daisy chained together, that do small tasks. These are then more highly maintainable, more easily testable, often more scalable, can be edited and deployed independently, can be structured around capabilities, and each of the services can be owned by the team that created it, with a contract to ensure we don’t screw over other teams as we edit them.

Amazon introduced AWS Lambda in 2014 and it became clear quickly that the new microservices paradigm was accelerating the move of many SaaS-based tools to a microservices architecture. Now, teams could build in Node or Python or Java or Ruby or C# or heaven forbid Go. They could quickly stand up a small service and get teams able to consume the back end service in a way that is scalable and doesn’t require standing up a server or even a virtual server, which is how we did things in EC2. The containerization concept is nothing new. We had chroot in 1979 with Unix v7 and Solaris brought us containerization in 2004. But those were more about security and isolation. Docker had shown up in 2013, and the idea of spinning up a container to run a script with its own libraries inside it, that was special. And Amazon made it more so.
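
For a sense of how small such a service can be, here’s a minimal sketch of a Python Lambda-style handler. The event shape assumes an API Gateway proxy integration, and the greeting logic is obviously made up:

import json

def lambda_handler(event, context):
    # The conventional AWS Lambda entry point: take an event,
    # do one small job, and return a response.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }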

Again, libraries and modularization. And the modular nature is key for me. Let’s say you need to do image processing. Pillow makes it easier to work with images of almost any type you can think of. For example, it can display an image, convert it into different formats, automatically generate thumbnails, apply smooth, blur, and contour filters, and even increase the detail. Libraries like that take a lot of the friction out of learning to display and manage images.
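
A small, hypothetical Pillow session might look like this - the file names are placeholders, and Pillow has to be installed separately (pip install Pillow):

from PIL import Image, ImageFilter

# Open an image, shrink it to a thumbnail, and apply a couple of filters.
img = Image.open("photo.jpg")            # placeholder path
img.thumbnail((256, 256))                # resize in place, keeping aspect ratio
img.convert("L").save("gray.png")        # grayscale copy
img.filter(ImageFilter.SMOOTH).save("smooth.png")
img.filter(ImageFilter.CONTOUR).save("contour.png")
img.filter(ImageFilter.DETAIL).save("detail.png")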

But Python can also create its own imagery. For example, Matplotlib generates two dimensional graphs and plots points on them. These can look as good as you want them to look and actually allow us to integrate with a ton of other systems.
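
A bare-bones Matplotlib example, again assuming the library is installed (pip install matplotlib):

import matplotlib.pyplot as plt

# Plot a handful of points and write the figure out to a file.
x = [0, 1, 2, 3, 4, 5]
y = [n * n for n in x]

plt.plot(x, y, marker="o", label="y = x^2")
plt.xlabel("x")
plt.ylabel("y")
plt.title("A simple two dimensional plot")
plt.legend()
plt.savefig("plot.png")  # or plt.show() for an interactive window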

Van Rossum’s career wasn’t all Python though. He would go on to work at NIST, then CNRI and Zope, before ending up at Google in 2005, where he created Mondrian, a code review system. He would go to Dropbox in 2013 and retire from professional life in 2019. He stepped down as the “Benevolent Dictator For Life” of the Python project in 2018 and sat on the Python Steering Council for a term, but is no longer involved. It’s been one of the more intriguing transfers of power I’ve seen, but Python is in great hands to thrive in the future. This was around the point when Python 2 was officially discontinued and Python 3 was thriving.

By thriving, I mean that as of mid-2020 there are over 200,000 packages in the Python Package Index. Things from web frameworks and web scraping to automation, graphical user interfaces, documentation, databases, analytics, networking, systems administration, science, mobile, image management and processing. If you can think of it, there’s probably a package to help you do it. And it’s one of the easier languages.

Here’s the thing. Python grew because of how flexible and easy it is to use. It didn’t have the same amount of baggage as other languages. And that flexibility and modular nature made it great for workloads in a changing and more microservice-oriented world. Or did it help make the world more microservice-oriented? It was a Christmas hobby project that has now ballooned into one of the most popular languages to write software in the world. You know what I did over my last holiday break? Sleep. I clearly should have watched more Monty Python so the short skits could embolden me to write a language perfect for making the programmer’s equivalent: smaller, more modular scripts and functions. So as we turn the corner into all the holidays in front of us, consider this while stuck at home: what hobby project can we propel forward and hopefully end up with the same type of impact Guido had? A true revolutionary in his own right. 

So thank you to everyone involved in Python and everyone that’s contributed to those 200k+ projects. And thank you, listeners, for continuing to tune in to the History of Computing Podcast. We are so lucky to have you.


From The Palm Pilot To The Treo

     4/3/2020

Today we’re going to look at the history of the Palm. 

It might be hard to remember at this point, but once upon a time, we didn’t all have mobile devices connected to the Internet. There was no Facebook or Grubhub. But in the 80s, computer scientists were starting to think about what ubiquitous computing would look like. We got the Psion and the HP Jaguar (which ran on DOS). But these seemed much more like really small laptops. And with tiny keyboards. 

General Magic spun out of Apple in 1990 but missed the mark. Other devices were continuing to hit the market, some running PenPoint from GO Corporation - but none really worked out. But former Intel, GRiD, and then Tandy employee Jeff Hawkins envisioned a personal digital assistant and founded Palm Computing in 1992 to build one. He had been interested in pen-based computing and had worked on pattern recognition for handwriting at UC Berkeley. He asked Ed Colligan of Radius and Donna Dubinsky of Claris to join him. She would become CEO.

They worked with Casio and Tandy to release the Casio Zoomer in 1993. The Apple Newton came along in 1993 and, partially due to processor speed and partially due to immaturity in the market, both devices failed to resonate with buyers. The Newton did better, but the General Magic ideas that had caught the imagination of the world were alive and well. HP Jaguars were using Palm’s synchronization software and so Palm was able to stay afloat. 

And so Hawkins got to work on new character recognition software. He got a tour of Xerox PARC, as did everyone else in computing, and saw Unistrokes, which had been developed by David Goldberg. Unistrokes resembled shorthand and required users to learn a new way of writing, but proved much more effective. Hawkins went on to build Graffiti based on that same concept, and because Xerox had patented the technology they would go into legal battles until Palm eventually settled for $22.5 million. 

More devices were coming every year and by 1995 Palm Computing was getting close to releasing a device. They had about $3 million to play with. They would produce a device that had fewer buttons and so a larger screen size than other devices. It had the best handwriting technology on the market. It was the perfect size - which Hawkins had made sure of by carrying around a block of wood in his pocket and to meetings to test it. The only problem was that they ran out of cash during the R&D and couldn’t take it to market. But they knew they had hit the mark. 

The industry had been planning for a pen-based computing device for some time and US Robotics saw an opening. Palm ended up selling to US Robotics, who had made a bundle selling modems, for $44 million. And Palm got folded into 3Com when it acquired US Robotics. 3Com had been co-founded by Bob Metcalfe, who co-invented Ethernet. 3Com banked on Ethernet being the next wave. And they were right. They had also banked on pen computing. And were right again!

US Robotics launched the Palm Pilot 1000 with 128k of RAM and the Palm Pilot 5000 with 512k of RAM in 1996. This was the first device that actually hit the mark. People became obsessed with Graffiti. You connected it to the computer using a serial port to synchronize Notes, Contacts, and Calendars. It seems like such a small thing now, but it was huge then. They were an instant success. Everyone in computing knew something would come along, but they didn’t realize this was it. Until it was! HP, Ericsson, Sharp, NEC, Casio, Compaq, and Philips would all release handhelds but the Palm was the thing. 

By 1998 the three founders were done getting moved around and left, creating a new company to make a similar device, called Handspring. Apple continued to flounder in the space releasing the eMate and then the MessagePad. But the Handspring devices were eerily similar to the Palms. Both would get infrared, USB, and the Handspring Visor would even run Palm OS 3. But the founders had a vision for something more.

They would take Handspring public in 2000. 3Com would take Palm public in 2000. The only problem was the dot-com bubble. Well, that and Research In Motion began to ship BlackBerry OS in 1999, and the next wave of devices began to chip away at the market share. Shares dropped over 90% and by 2002 Palm had to set up a subsidiary for the Palm OS.

But again, the crew at Handspring had something more in mind. They released the Treo in 2002. The Handspring Treo was, check this out, a smartphone. It could do email, SMS, voice calls. Over the years they would add a camera, GPS, MP3, and Wi-Fi. Basically what we all expect from a smartphone today. 

Handspring merged with Palm in 2003 and they released the Palm Treo 600. They also merged back in the company the OS had been spun out into, with everything finally back together by 2005. Meanwhile, the Pilot pen company had sued Palm over the name, and the devices were then just called Palm. We got a few more models, with the Palm V probably being the best, a few new features, and lots and lots of syncing problems as new sync tools were added. 

Now that all of the parts of the company were back together, they started planning for a new OS, which they announced in 2009. And webOS was supposed to be huge. And they announced the Palm Pre, the next killer smartphone. 

The only problem is that the iPhone had come along in 2007. And Android was released in 2008. Palm had the right idea. They just got sideswiped by Apple and Google. 

And they ran out of money. They were bought by Hewlett-Packard in 2010 for 1.2 billion dollars. Under new management the company was again split into parts, with webOS never really taking off, the Pre 3 never really shipping, and TouchPads not actually being any good, ultimately ending in the CEO of HP getting fired (along with other things). Once Meg Whitman stepped in as CEO, webOS was open sourced and the remaining assets sold off to LG Electronics to be used in smart TVs. 

The Palm Pilot was the first successful handheld device. It gave us permission to think about more. The iPod came along in 2001, in a red ocean of crappy MP3 handheld devices. And over time it would get some of the features of the Palm. But I can still remember the day the iPhone came out and the few dozen people I knew with Treos cursing because they knew it was time to replace it. In the meantime Windows CE and other mobile operating systems had just pilfered market share away from Palm slowly. The founders invented something people truly loved. For awhile. And they had the right vision for the next thing that people would love. They just couldn’t keep up with the swell that would become the iPhone and Android, which now own pretty much the entire market. 

And so Palm is no more. But they certainly left a dent in the universe. And we owe them our thanks for that. Just as I owe you my thanks for tuning in to this episode of the History of Computing Podcast. We are so lucky you decided to listen in - you’re welcome back any time! Have a great day!


The Evolution (and De-Evolution) of the Mac Server

     2/28/2020

Today’s episode is on one of the topics I am probably the most intimate with that we’ll cover: the evolution of the Apple servers and then the rapid pivot towards a much more mobility-focused offering. Early Macs in 1984 shipped with AppleTalk. These could act as a server or workstation. But after a few years, engineers realized that Apple needed a dedicated server platform. Apple has had a server product starting in 1987 that lives on to today.

At Ease had some file and print sharing options. But the old AppleShare (later called AppleShare IP) server was primarily used to provide network resources to the Mac from 1986 to 2000, with file sharing being the main service offered. There were basically two options: At Ease, which ran on the early Mac operating systems, and A/UX, or Apple Unix. The latter brought paged memory management and could run on the Macintosh II through the Centris Macs. Apple Unix shipped from 1988 to 1995 and had been based on System V. It was a solidly performing TCP/IP machine and introduced the Mac to the world of POSIX. Apple Unix could emulate Mac apps and once you were under the hood, you could do pretty much anything you might do in another Unix environment.

Apple also took a stab at early server hardware, first with servers based on the Quadra 950 and then with the Apple Network Server, a PowerPC server announced in 1995, around when Apple Unix went away, and sold from 1996 to 1997 - although the server naming was used all the way until 2003. While these things were much more powerful and came with modern hardware, they didn’t run the Mac OS but ran another Unix type of operating system, AIX, which had begun life at about the same time as Apple Unix and was another System V variant, but had much more work put into it. And given financial issues at Apple and the Taligent relationship between Apple and IBM to build a successor to Mac OS and OS/2, it made sense to work together on the project.

Meanwhile, At Ease continued to evolve and Apple eventually shipped a new offering in the form of AppleShare IP, which worked up until Mac OS 9.2.2. In an era before you needed to require SMTP authentication, as an example, AppleShare IP was easily used for everything from file sharing services to mail services. An older Quadra made for a great mail server so your company could stop paying an ISP for some weird email address like that AOL address you got in college, and get your own domain in 1999! And if you needed more, you could easily slap some third party software on the hosts - like if you actually wanted SMTP authentication so your server didn’t get used to route this weird thing called spam, you could install CommuniGate or later CommuniGate Pro.

Keep in mind that many of the engineers from NeXT had remained friends with engineers from Apple after Steve Jobs left Apple. Some still actually work at Apple. Serving services was a central need for NEXTSTEP and OPENSTEP systems. The UNIX underpinnings made it possible to compile a number of open source software packages, and the first web server was hosted by Tim Berners-Lee on a NeXTcube. During the transition over to Apple, AppleShare IP and services from NeXT were made to look and feel similar and turned into Rhapsody from around 1999 and then Mac OS X Server from around 2000. The first few releases of Mac OS X Server represented a learning curve for many classic Apple admins, and in fact caused a generational shift in who administered the systems. John Welch wrote books in 2000 and 2002 that helped administrators get up to speed. 
The Xserve was released in 2002 and the Xserve RAID was released in 2003. It took time, but a community began to form around these products. The Xserve would go from a G3 to a G4. The late Michael Bartosh compiled a seminal work in “Essential Mac OS X Panther Server Administration” for O’Reilly Media in 2005. I released my first book, called The Mac Tiger Server Black Book, in 2006. The server was enjoying a huge upswing in use. Schoun Regan and Kevin White wrote a Visual QuickStart for Panther Server. Schoun wrote one for Tiger Server. The platform was growing. People were interested. Small businesses, schools, universities, art departments in bigger companies.

The Xserve would go from a G4 to an Intel processor and we would get cluster nodes to offload processing power from more expensive servers. Up until this point, Apple had never publicly acknowledged that businesses or enterprises used their devices, so the rise of the Xserve advertising was the first time we saw that acknowledgement. Apple continued to improve the product with new services up until 2009 with Mac OS X Server 10.6. At this point, Apple included most services necessary for running a standard IT department for small and medium-sized businesses in the product, including web (in the form of Apache), mail, groupware, DHCP, DNS, directory services, file sharing, and even wiki services. There were also edge case services such as Podcast Producer for automating video and content workflows and Xsan, a clustered file system, and in 2009 Apple even purchased a company called Artbox, whose product was rebranded as Final Cut Server. Apple now had multiple awesome, stable products. Dozens of books and websites were helping build a community and grow knowledge of the platform.

But that was a turning point. Around that same time Apple had been working towards the iPad, released in 2010 (although arguably the Knowledge Navigator was the first iteration, conceptualized in 1987). The skyrocketing sales of the iPhone led to some tough decisions. Apple no longer needed to control the whole ecosystem with their server product and instead began transitioning as many teams as possible to work on higher profit margin areas, reducing focus on areas that took attention away from valuable software developers who were trying to solve problems many other vendors had already solved better. In 2009 the Xserve RAID was discontinued and the Xserve went away the following year. By then, the Xserve RAID was lagging and for the use cases it served, there were other vendors whose sole focus was storage - and who Apple actively helped point customers towards. Namely the Promise array for Xsan.

A few other things were happening around the same time. Apple could have bought Sun for less than 10% of their CASH reserves in 2010 but instead allowed Oracle to buy the tech giant. Instead, Apple released the iPad. Solid move. They also released the Mac Mini server, which, while it lacked rack-and-stack options like an IPMI interface to remotely reboot the server and dual power supplies, was actually more powerful. The next few years saw services slowly peeled off the server. Today, the Mac OS X Server product has been migrated to just an app on the App Store. macOS Server is now meant to run Profile Manager and act as a metadata controller for Xsan, Apple’s clustered file system. Products that used to compete with the platform are now embraced by most in the community. 
For the most part, this is because Apple let Microsoft or Linux-based systems own the market for providing features that are often unique to each enterprise and not about delighting end users. Today building server products that try to do everything for everyone seems like a distant memory for many at Apple. But there is still a keen eye towards making the lives of the humans that use Apple devices better, as has been the case since Steve Jobs mainstreamed the GUI and Apple made the great user experience advocate Larry Tesler their Chief Scientist. How services make a better experience for end users can be seen by the Caching service built into macOS (moved there from macOS Server) and how some products, such as Apple Remote Desktop, are still very much alive and kicking. But the focus on profile management and the desire to open up everything Profile Manager can do to third party developers who serve often niche markets or look more to scalability is certainly front and center. I think this story of the Apple Server offering is really much more about Apple branching into awesome areas that they needed to be at various points in time. Then having a constant focus on iterating to a better, newer offering. Growing with the market. Helping the market get to where they needed them to be. Serving the market and then when the needs of the market can be better served elsewhere, pulling back so other vendors could serve the market. Not looking to grow a billion dollar business unit in servers - but instead looking to provide them just until they didn’t need to. In many ways Apple paved the way for billion dollar businesses to host services. And the SaaS ecosystem is as vibrant for the Apple platform as ever. My perspective on this has changed a lot over the years. As someone who wrote a lot of books about the topic I might have been harsh at times. But that’s one great reason not to be judgmental. You don’t always know the full picture and it’s super-easy to miss big strategies like that when you’re in the middle of it. So thank you to Apple for putting user experience into servers as with everything you do. And thank you listeners for tuning into this episode of the History of Computing Podcast. We’re certainly lucky to have you and hope you join us next time!


The History Of The Computer Modem

     4/1/2020

Today we’re going to look at the history of the dial-up computer modem. 

Modem is short for modulator/demodulator. Modulation means carrying a property (like voice or computer bits) over a waveform. Modems originally encoded voice data with frequency-shift keying, a technique developed during World War II. The voices were encoded into digital tones. That system was called SIGSALY. But they called the devices vocoders at the time. 
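To make frequency-shift keying a little more concrete, here’s a tiny illustrative sketch in Python - the sample rate, bit duration, and tone frequencies are arbitrary values chosen for the example, not any real modem specification:

    import numpy as np

    SAMPLE_RATE = 8000                  # samples per second
    BIT_DURATION = 0.01                 # seconds spent on each bit
    FREQ_ZERO, FREQ_ONE = 1070, 1270    # one tone for a 0, another for a 1

    def fsk_modulate(bits):
        # Each bit becomes a short burst of one of the two tones.
        t = np.arange(0, BIT_DURATION, 1 / SAMPLE_RATE)
        bursts = [np.sin(2 * np.pi * (FREQ_ONE if b else FREQ_ZERO) * t) for b in bits]
        return np.concatenate(bursts)

    waveform = fsk_modulate([1, 0, 1, 1, 0])
    print(len(waveform), "samples")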

They matured over the next 17 years. And then came the SAGE air defense system in 1958. Here, the modem was employed to connect bases, missile silos, and radars back to the central SAGE system. These were Bell 101 modems and ran at an amazing 110 baud. Bell Labs, as in AT&T.  

A baud is a unit of transmission that is equal to how many times a signal changes state per second. In those early modems, each of those signal changes carried one bit, so that first modem was able to process data at 110 bits per second. This isn’t to say that baud is the same as bit rate. Early on the two matched, but later encoding schemes packed multiple bits into each signal change, so the bit rate climbed higher than the baud rate. 
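A quick sketch of that relationship - the pairings below are illustrative examples, not a table of real modem standards:

    def bit_rate(baud: int, bits_per_symbol: int) -> int:
        # Bit rate is the symbol rate (baud) times the bits carried per symbol.
        return baud * bits_per_symbol

    print(bit_rate(110, 1))     # one bit per signal change: 110 baud -> 110 bits per second
    print(bit_rate(2400, 4))    # pack 4 bits into each change: 2,400 baud -> 9,600 bits per second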

So AT&T had developed the modem and after a few years they began to see commercial uses for it. So in 1962, they revved that 101 to become the Bell 103. Actually, 103A. This thing used newer technology and better encoding, so could run at 300 bits per second. Suddenly teletypes - or terminals, could connect to computers remotely. But ma’ Bell kept a tight leash on how they were used for those first few years. That, until 1968.

In 1968 came what is known as the Carterfone Decision. We owe a lot to the Carterfone. It bridged radio systems to telephone systems. And Ma Bell had been controlling what lived on their lines for a long time. The decision opened up what devices could be plugged into the phone system. And suddenly new innovations like fax machines and answering machines showed up in the world. 

And so in 1968, any device with an acoustic coupler could be hooked up to the phone system. And that Bell 103A would lead to others. By 1972, Stanford Research had spun out a device, and companies like Novation and others were shipping their own. But the Vadic added full duplex and got speeds four times what the 103A worked at by employing duplexing and new frequencies. We were up to 1200 bits per second. 

The bit rate had jumped four-fold because, well, competition. Prices dropped and by the late 1970s microcomputers were showing up in homes. There was a modem for the S-100 Altair bus, the Apple II through a Z-80 SoftCard, and even for the Commodore PET. And people wanted to talk to one another. TCP had been developed in 1974 but at this point the most common way to communicate was to dial directly into bulletin board services. 

1981 was a pivotal year. A few things happened that were not yet connected at the time. The National Science Foundation created the Computer Science Network, or CSNET, which would result in NSFNET later, and when combined with the other nets, the Internet, replacing ARPANET. 

1981 also saw the release of the Commodore VIC-20 and TRS-80. This led to more and more computers in homes and more people wanting to connect with those online services. Later models would have modems.

1981 also saw the release of the Hayes Smartmodem. This was a physical box that connected to the computer over a serial port. The Smartmodem had a controller that recognized commands. And it established the Hayes command set standard that would be used to connect to phone lines, allowing you to initiate a call, dial a number, answer a call, and hang up - without lifting a handset and placing it on an acoustic coupler. On the inside it was still 300 baud, but the progress and innovations were speeding up. And it didn’t seem like a huge deal. 
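The Hayes command set is still recognizable today, so a modern sketch is easy to imagine. The snippet below assumes the pyserial library and a made-up port name; the commands themselves - AT for attention, ATDT to dial, ATH to hang up - are classic Hayes:

    import serial  # pyserial

    # "/dev/ttyUSB0" is a placeholder; use whatever port your modem shows up on.
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as modem:
        modem.write(b"AT\r")            # attention - the modem should answer "OK"
        print(modem.readline())
        modem.write(b"ATDT5551234\r")   # dial 555-1234 using tone dialing
        # ... exchange data over the connection ...
        modem.write(b"ATH\r")           # hang up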

The online services were starting to grow. The French Minitel service was released commercially in 1982. The first BBS that would become Fidonet showed up in 1983. Various encoding techniques started to come along and by 1984 you had the Trailblazer modem, at over 18,000 bits a second. But, this was for specific uses and combined 36 bit/second channels. 

The use of email started to increase and the needs for even more speed. We got the ability to connect two USRobotics modems in the mid-80s to run at 2400 bits per second. But Gottfried Ungerboeck would publish a paper defining a theory of information coding and add parity checking at about the time we got echo suppression. This allowed us to jump to 9600 bits in the late 80s. 

All of these vendors releasing all of this resulted in the v.21 standard in 1989 from the  ITU Telecommunication Standardization Sector (ITU-T). They’re the ones that ratify a lot of standards, like x.509 or MP4. Several other v dot standards would come along as well. 

The next jump came with the SupraFaXModem with Rockwell chips, which was released in 1992. And USRobotics brought us to 16,800 bits per second but with errors. But we got v.32 in 1991 to get to 14.4 - now we were talking in kilobits! Then 19.2 in 1993, 28.8 in 1994, 33.6 in 1996. By 1999 we got the last of the major updates, v.90 which got us to 56k. At this point, most homes in the US at least had computers and were going online. 

The same year, ANSI ratified ADSL, or Asymmetric Digital Subscriber Lines. Suddenly we were communicating in the megabits. And the dial-up modem began to be used a little less and less. In 2004 Multimedia over Coax Alliance was formed and cable modems became standard. The combination of DSL and cable modems has now all but removed the need for dial up modems. Given the pervasiveness of cell phones, today, as few as 20% of homes in the US have a phone line any more. We’ve moved on.

But the journey of the dial-up modem was a key contributor to us getting from a lot of disconnected computers to… The Internet as we know it today. So thank you to everyone involved, from Ma Bell, to Rockwell, to USRobotics, to Hayes, and so on. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you. Have a great day. 


Happy Birthday ENIAC

     2/15/2020

Today we’re going to celebrate the birthday of the first real multi-purpose computer: the gargantuan ENIAC, which would have turned 74 years old today, on February 15th. Many generations ago in computing. The year is 1946. World War II raged from 1939 to 1945. We’d cracked Enigma with computers and scientists were thinking of more and more ways to use them. The press is now running articles about a “giant brain” built in Philadelphia. The Electronic Numerical Integrator and Computer was a mouthful, so they called it ENIAC. It was the first true electronic computer. Before that there were electromechanical monstrosities. Those had to physically move a part in order to process a mathematical formula. That took time. ENIAC used vacuum tubes instead. A lot of them.

To put things in perspective: every hour of processing by the ENIAC was worth 2,400 hours of work calculating formulas by hand. And it’s not like you could do 2,400 hours in parallel between people, or in a row, of course. So it made the previously almost impossible, possible. Sure, you could figure out the settings to fire a bomb where you wanted the bomb to go in a minute rather than about a full day of running calculations. But math itself, for the purposes of math, was about to get really, really cool.

The Bush Differential Analyzer, an earlier mechanical computer, had been built in the basement of the building that is now the ENIAC museum. The University of Pennsylvania ran a class on wartime electronics, based on their experience with the Differential Analyzer. John Mauchly and J. Presper Eckert met in 1941 while taking that class, a topic that had included lots of shiny new or newish things like radar and cryptanalysis. That class was mostly on ballistics, a core focus at the Moore School of Electrical Engineering at the University of Pennsylvania. More accurate ballistics would be a huge contribution to the war effort. But Eckert and Mauchly wanted to go further, building a multi-purpose computer that could analyze weather and calculate ballistics. Mauchly got all fired up and wrote a memo about building a general purpose computer. But the University shot it down.

And so ENIAC began life as Project PX when Herman Goldstine acted as the main sponsor after seeing their proposal and digging it back up. Mauchly would team up with Eckert to design the computer and the effort was overseen and orchestrated by Major General Gladeon Barnes of the US Army Ordnance Corps. Thomas Sharpless was the master programmer. Arthur Burks built the multiplier. Robert Shaw designed the function tables. Harry Huskey designed the reader and the printer. Jeffrey Chu built the dividers. And Jack Davis built the accumulators. Ultimately it was just a really big calculator and not a computer that ran stored programs in the same way we do today. Although ENIAC did get an early version of stored programming that used a function table for read-only memory.

The project was supposed to cost $61,700. The University of Pennsylvania Department of Computer and Information Science in Philadelphia actually spent half a million dollars worth of metal, tubes and wires. And of course the scientists weren’t free. That’s around six and a half million dollars today. And of course it was paid for by the US Army. Specifically the Ballistic Research Laboratory. It was designed to calculate firing tables to make blowing things up a little more accurate. 
Herman Goldstine chose a team of programmers that included Betty Jennings, Betty Snyder, Kay McNulty, Fran Bilas, Marlyn Meltzer, and Ruth Lichterman. They were chosen from a pool of 200 and set about writing the necessary formulas for the machine to process the requirements provided by people using time on the machine. In fact, Kay McNulty invented the concept of subroutines while working on the project. They would flip switches and plug in cables as a means of programming the computer. And programming took weeks of figuring out complex calculations on paper. Then it took days of fiddling with cables, switches, tubes, and panels to input the program. Debugging was done step by step, similar to how we use breakpoints today. They would feed ENIAC input using IBM punch cards and readers. The output was punch cards as well, and these punch cards acted as persistent storage.

The machine used standard octal radio tubes - 18,000 of them - and they ran at a lower voltage than they could have in order to minimize them blowing out and creating heat. Each digit used in calculations took 36 of those vacuum tubes, and there were 20 accumulators that could run 5,000 operations per second. The accumulators used two of those tubes to form a flip-flop and they got them from the Kentucky Electrical Lamp Company. Given the number that blew every day, life must have been rough until engineers got it down to only blowing a tube every couple of days. ENIAC was a modular computer and used different panels to perform different tasks, or functions. It used ring counters with 10 positions for a lot of operations, making it a decimal computer as opposed to the binary computational devices we have today. The pulses between the rings were used to count.

Suddenly computers were big money. A lot of research had happened in a short amount of time. Some had been government funded and some had been part of corporations and it became impossible to untangle the two. This was pretty common with technical advances during World War II and the early Cold War years. John Atanasoff and Cliff Berry had ushered in the era of the digital computer in 1939 but hadn’t finished. Mauchly had seen that in 1941. ENIAC was used to run a number of calculations for the Manhattan Project, allowing us to blow more things up than ever. That project took over a million punch cards and took precedence over artillery tables. John von Neumann worked with a number of mathematicians and physicists including Stanislaw Ulam, who developed the Monte Carlo method. That led to a massive reduction in programming time. Suddenly programming became more about I/O than anything else.

To promote the emerging computing industry, the Pentagon had the Moore School of Electrical Engineering at the University of Pennsylvania launch a series of lectures to further computing at large. These were called the Theory and Techniques for Design of Electronic Digital Computers, or just the Moore School Lectures for short. The lectures focused on the various types of circuits and the findings from Eckert and Mauchly on building and architecting computers. Goldstine would talk at length about math and other developers would give talks, looking forward to the development of the EDVAC and back at how they got where they were with ENIAC. As the University began to realize the potential business impact and monetization, they decided to bring a focus to University-owned patents. 
That drove the original designers out of the University of Pennsylvania and they started the Eckert-Mauchly Computer Corporation in 1946. Eckert-Mauchly would then build the EDVAC, making use of the progress the industry had made since the ENIAC construction had begun. EDVAC would effectively represent the wholesale move away from decimal and into binary computing, and while it weighed tons, it would become the precursor to the microchip. After the ENIAC was finished, Mauchly filed for a patent in 1947. While a patent was granted, you could still count on your fingers the number of machines that were built at about the same time, including the Atanasoff-Berry Computer, Colossus, the Harvard Mark I and the Z3. So luckily the patent was eventually voided, in 1973, and digital computers are a part of the public domain. By then, the Eckert-Mauchly Computer Corporation had been acquired by Remington Rand, which merged with Sperry and is now called Unisys.

The next wave of computers would be mainframes built by GE, Honeywell, IBM, and a number of other vendors, and so the era of batch processing mainframes began. The EDVAC begat the UNIVAC, with Grace Hopper brought in to write an assembler for it. Computers would become the big mathematical number crunchers and slowly spread into being data processors from there. Following decades of batch processing mainframes we would get minicomputers and interactivity, then time sharing, and then the PC revolution. Distinct eras in computing.

Today, computers do far more than just the types of math the ENIAC did. In fact, the functionality of ENIAC was duplicated onto a 20 megahertz microchip in 1996. You know, ‘cause the University of Pennsylvania wanted to do something to celebrate the 50th birthday. And a birthday party seemed underwhelming at the time. And so the date of release for this episode is February 15th, now ENIAC Day in Philadelphia, dedicated as a way to thank the university, creators, and programmers. And we should all reiterate their thanks. They helped put computers front and center into the thoughts of the next generation of physicists, mathematicians, and engineers, who built the mainframe era. And I should thank you - for listening to this episode. I’m pretty lucky to have ya’. Have a great day!


The Mouse

     2/18/2020

In a world of rapidly changing technologies, few have lasted as long in as unaltered a fashion as the mouse. The party line is that the computer mouse was invented by Douglas Engelbart in 1964 and that it was a one-button wooden device that had two metal wheels. Those used an analog to digital conversion to input a location to a computer. But there’s a lot more to tell.

Engelbart had read an article in 1945 called “As We May Think” by Vannevar Bush. He was in the Philippines working as a radio and radar tech. He’d return home, get his degree in electrical engineering, then go to Berkeley and get first his master’s and then a PhD. Still in electrical engineering. At the time there were a lot of military grants in computing floating around and a Navy grant saw him work on a computer called CALDIC, short for the California Digital Computer. By the time he completed his PhD he was ready to start a computer storage company but ended up at the Stanford Research Institute in 1957. He published a paper in 1962 called Augmenting Human Intellect: A Conceptual Framework. That paper would guide the next decade of his life and help shape nearly everything in computing that came after.

Keeping with the theme of “As We May Think”, Engelbart was all about supplementing what humans could do. The world of computer science had been interested in selecting things on a computer graphically for some time. And Engelbart would have a number of devices that he wanted to test in order to find the best possible device for humans to augment their capabilities using a computer. He knew he wanted a graphical system and wanted to be deliberate about every aspect in a very academic fashion. And a key aspect was how people that used the system would interact with it. The keyboard was already a mainstay but he wanted people pointing at things on a screen.

While Engelbart would invent the mouse, pointing devices certainly weren’t new. Pilots had been using the joystick for some time, and an electrical joystick had been developed at the US Naval Research Laboratory in 1926, with the concept of unmanned aircraft in mind. The Germans would end up building one in 1944 as well. But it was Alan Kotok who brought the joystick to the computer game in the early 1960s to play Spacewar on minicomputers. And Ralph Baer brought it into homes in 1967 for an early video game system, the Magnavox Odyssey.

Another input device that had come along was the trackball. Ralph Benjamin of the British Royal Navy’s Scientific Service invented the trackball, or ball tracker, for radar plotting on the Comprehensive Display System, or CDS. The computers were analog at the time but they could still use the X-Y coordinates from the trackball, which they patented in 1947. Tom Cranston, Fred Longstaff and Kenyon Taylor had seen the CDS trackball and used that as the primary input for DATAR, a radar-driven battlefield visualization computer. The trackball stayed in radar systems into the 60s, when Orbit Instrument Corporation made the X-Y Ball Tracker and then Telefunken turned it upside down to control the TR 440, making an early mouse type of device.

The last of the options Engelbart decided against was the light pen. Light guns had shown up in the 1930s when engineers realized that a vacuum tube was light-sensitive. You could shoot a beam of light at a tube and it could react. Robert Everett worked with Jay Forrester to develop the light pen, which would allow people to interact with a CRT using light sensing to cause an interrupt on a computer. 
This would move to the SAGE computer system from there and eke into the IBM mainframes in the 60s. While the technology used to track the coordinates is not even remotely similar, think of this as conceptually similar to the styluses used with tablets and on Wacom tablets today.

Paul Morris Fitts had built a model in 1954, now known as Fitts’s Law, to predict the time that’s required to move things on a screen. He defined the time to acquire a target as a function of the ratio between the distance to the target and the width of the target - there’s a small sketch of that formula below. If you listen to enough episodes of this podcast, you’ll hear a few names repeatedly. One of those is Claude Shannon. He brought a lot of the math to computing in the 40s and 50s and helped with the Shannon-Hartley Theorem, which defined information transmission rates over a given medium.

So these were the main options at Engelbart’s disposal to test when he started ARC. But in looking at them, he had another idea. He’d sketched out the mouse in 1961 while sitting in a conference session about computer graphics. Once he had funding he brought in Bill English to build a prototype in 1963. The first model used two perpendicular wheels attached to potentiometers that tracked movement. It had one button to select things on a screen. It tracked x,y coordinates as had previous devices. NASA funded a study to really dig in and decide which was the best device. He, Bill English, and an extremely talented team spent two years researching the question, publishing a report in 1965. They really had the blinders off, too. They looked at the DEC Grafacon, joysticks, light pens and even what amounts to a mouse that was knee operated. Two years of what we’d call UX research or User Research today. Few organizations would dedicate that much time to study something. But the result would be patenting the mouse in 1967, an innovation that would last for over 50 years.

I’ve heard Engelbart criticized for taking so long to build the oNline System, or NLS, which he showcased at the Mother of All Demos. But it’s worth thinking of his research as academic in nature. It was government funded. And it changed the world. His paper on Computer-Aided Display Controls was seminal. Vietnam caused a lot of those government funded contracts to dry up. From there, Bill English and a number of others from the Stanford Research Institute, which ARC was a part of, moved to Xerox PARC. English and Jack Hawley iterated and improved the technology of the mouse, ditching the analog to digital converters, and over the next few years we’d see some of the most substantial advancements in computing.

By 1981, Xerox had shipped the Alto and the Star. But while Xerox would be profitable with their basic research, they would miss something that a sandal-clad hippie wouldn’t. In 1979, Xerox let Steve Jobs make three trips to PARC in exchange for the opportunity to buy 100,000 shares of Apple stock pre-IPO. The mouse by then had evolved to a three button mouse that cost $300. It didn’t roll well and had to be used on pretty specific surfaces. Jobs would call Dean Hovey, a co-founder of IDEO, and demand they design one that would work on anything, including quote “blue jeans.” Oh, and he wanted it to cost $15. And he wanted it to have just one button, which would be an Apple hallmark for the next 30ish years. Hovey-Kelley would move to optical encoder wheels, freeing the tracking ball to move however it needed to, and then use injection molded frames. And thus make the mouse affordable. 
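That Fitts’s Law relationship is usually written as MT = a + b * log2(D/W + 1). A minimal sketch, where the a and b constants are made-up example values rather than anything measured:

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        # Predicted time in seconds to hit a target of a given width at a given distance.
        # The a and b constants are device-specific and would normally be fit from real data.
        return a + b * math.log2(distance / width + 1)

    # A far, small target takes longer to hit than a near, big one.
    print(fitts_movement_time(distance=800, width=20))   # ~0.90 seconds
    print(fitts_movement_time(distance=100, width=80))   # ~0.28 seconds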
It’s amazing what can happen when you combine all that user research and academic rigor from Engelbart’s team and the engineering advancements documented at Xerox PARC with world-class industrial design. You see this trend played out over and over with the innovations in computing that are built to last. The mouse would ship with the Lisa and then with the 1984 Mac. Logitech had shipped a mouse in 1982 for $300. After leaving Xerox, Jack Hawley founded a company to sell a mouse for $400 the same year. Microsoft released a mouse for $200 in 1983. But Apple changed the world when Steve Jobs demanded the mouse ship with all Macs.

The IBM PC would use a mouse and from there it would become ubiquitous in personal computing. Desktops would ship with a mouse. Laptops would have a funny little button that could be used as a mouse when the actual mouse was unavailable. The mouse would ship with extra buttons that could be mapped to additional workflows or macros. And even servers were then outfitted with switches that allowed using a device that switched the keyboard, video, and mouse between them during the rise of large server farms to run the upcoming dot com revolution. Trays would be put into most racks, with a single U, or unit, of the rack being used to see what you’re working on; especially after Windows or windowing servers started to ship.

As various technologies matured, other innovations came along to input devices. The mouse would go optical in 1980 and ship with early Xerox Star computers, but what we think of as an optical mouse wouldn’t really ship until 1999 when Microsoft released the IntelliMouse. Some of that tech came to them via Hewlett-Packard through the HP acquisition of DEC, and some of those same Digital Research Institute engineers had been brought in from the original mainstreamer of the mouse, PARC, when Bob Taylor started DRI. The LED sensor on the mouse stuck around. And thus ended the era of the mouse pad, once a hallmark of many a marketing give-away.

Finger tracking devices came along in 1969 but were far too expensive to produce at the time. As capacitive sensing pads, or trackpads, came down in price and the technology matured, those began to replace the previous mouse types of devices. The 1982 Apollo computers were the first to ship with a touchpad, but it wasn’t until Synaptics launched the TouchPad in 1992 that they began to become common, showing up in 1995 on Apple laptops and then becoming ubiquitous over the coming years. In fact, the IBM ThinkPad and many others shipped laptops with little red nubs in the keyboard for people that didn’t want to use the TouchPad for a while as well.

Some advancements in the mouse didn’t work out. Apple released the hockey puck shaped mouse in 1998, when they released the iMac. It was USB, which replaced the ADB interface. USB lasted. The shape of the mouse didn’t. Apple would go to the monolithic surface mouse in 2000, go wireless in 2003 and then release the Mighty Mouse in 2005. The Mighty Mouse would have a capacitive touch sensor and, since people wanted to hear a click, would produce that with a little speaker. This also signified the beginning of Bluetooth as a means of connecting a mouse. Laptops began to replace desktops for many, and so the mouse itself isn’t as dominant today. And with mobile and tablet computing, resistive touchscreens rose to replace many uses for the mouse. 
But even today, when I edit these podcasts, I often switch over to a mouse simply because other means of dragging around timelines aren’t as graceful. And using a pen, as Engelbart’s research from the 60s indicated, simply gets fatiguing. Whether or not it’s always obvious, we have an underlying story we’re often trying to tell with each of these episodes. We obviously love unbridled innovation and a relentless drive towards a technologically utopian multiverse. But taking a step back during that process and researching what people want means less work and faster adoption. Doug Engelbart was a lot of things, but one net-new point we’d like to make is that he was possibly the most innovative in harnessing user research to make sure that his innovations would last for decades to come.

Today, we’d love to research every button and heat map and track eyeballs. But remembering, as he did, that our job is to augment human intellect - and that this is best done when we make our advances useful - helps to keep us, and the forks in technology that stem from us, from having to backtrack decades of work in order to take the next jump forward. We believe in the reach of your innovations. So next time you’re working on a project, save yourself time, save your code a little cyclomatic complexity, and save users the frustration of having to relearn a whole new thing. And research what you’re going to do first. Because you never know. Something you engineer might end up being touched by nearly every human on the planet the way the mouse has.

Thank you, Engelbart. And thank you to NASA and Bob Roberts from ARPA for funding such important research. And thank you to Xerox PARC, for carrying the torch. And to Steve Jobs for making the mouse accessible to everyday humans. As with many an advance in computing, there are a lot of people that deserve a little bit of the credit. And thank you listeners, for joining us for another episode of the history of computing podcast. We’re so lucky to have you. Now stop consuming content and go change the world.


OS/2

     2/21/2020

Today we’re going to look at an operating system from the 80s and 90s called OS/2. OS/2 was a bright shining light for a bit. IBM had a task force that wanted to build a personal computer. They’d been watching the hobbyists for some time and felt they could take off-the-shelf parts and build a PC. So they did. But they needed an operating system. They reached out to Microsoft in 1980, who’d been successful with the Altair and so seemed a safe choice. By then, IBM had the IBM Entry Systems Division based out of their Boca Raton, Florida offices. The open architecture allowed them to ship fast. And it afforded them the chance to ship a computer with, check this out, options for an operating system. Wild idea, right? The options initially provided were CP/M and PC DOS, which was MS-DOS ported to the IBM open architecture. CP/M sold for $240 and PC DOS sold for $40. PC DOS had come from Microsoft’s acquisition of 86-DOS from Seattle Computer Products. The PC shipped in 1981, lightning fast for an IBM product.

At the time Apple, Atari, and Commodore were in control of the personal computer market. IBM had dominated the mainframe market for decades and once the personal computer market reached $100 million in sales, it was time to go get some of that. And so the IBM PC would come to be an astounding success and make it not uncommon to see PCs on people’s desks at work or even at home. And being that most people didn’t know a difference, PC DOS would ship on most.

By 1985 it was clear that Microsoft had entered and subsequently dominated the PC market. And it was clear that, due to the open architecture, other vendors were starting to compete. And after 5 years of working together on PC DOS and 3 versions later, Microsoft and IBM signed a Joint Development Agreement and got to work on the next operating system. One they thought would change everything and set IBM PCs up to dominate the market for decades to come. Over that time, they’d noticed some gaps in DOS. One of the most substantial was that once projects and files got too big, they became unwieldy. They wanted an object oriented operating system. Another was protected mode. The 286 chips from Intel had protected mode dating back to 1982, and IBM engineers felt they needed to harness that in order to get multi-tasking safely and harness virtual memory to provide better support for all these crazy new windowing things they’d learned with their GUI overlay to DOS called TopView.

So after the Joint Development Agreement was signed, IBM let Ed Iacobucci lead the charge on their side, and Microsoft had learned a lot from their attempts at a windowing operating system. The two organizations borrowed ideas from all the literature and Unix and of course the Mac. And really built a much better operating system than anything available at the time. Microsoft had been releasing Windows the whole time. Windows 1 came in 1985 and Windows 2 came in 1987, the same year OS/2 1.0 was released. In fact, one of the most dominant PC models to ever ship, the PS/2 computer, would ship that year as well. The initial release didn’t have a GUI. That wouldn’t come until version 1.1 nearly a year later in 1988. SNA shipped to interface with IBM mainframes in that release as well. And TCP/IP and Ethernet would come in version 1.2 in 1989. During this time, Microsoft steadily introduced new options in Windows and claimed both publicly and privately in meetings with IBM that OS/2 was the OS of the future and Windows would some day go away. 
They would release an extended edition that included a built-in database. Thanks to protected mode, developers didn’t have to call the BIOS any more and could just use provided APIs. You could switch the foreground application using control-escape. In Windows that would become Alt-Tab. 1.2 brought the HPFS file system, bringing longer file names, a journaled file system to protect against data loss during crashes, and extended attributes, similar to how those worked on the Mac. But many of the features would ship in a version of Windows released just a few months before. Like that GUI: Windows 2.1 came out just a few months before OS/2 1.1 shipped Presentation Manager.

Microsoft had an independent sales team. Every manufacturer that bundled Windows meant there were more drivers for Windows, so a wider variety of hardware could be used. Microsoft realized that DOS was old and building on top of DOS was going to some day be a big, big problem. They started something similar to what we’d call a fork of OS/2 today. And in 1988 they lured Dave Cutler from Digital, who had been the architect of the VMS operating system. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler’s NT would replace all other operating systems in the family with the release of Windows 2000.

But by 1990, when Microsoft released Windows 3, they sold millions of copies. Due to great OEM agreements they were on a lot of computers that people bought. The Joint Development Agreement would finally end. IBM had had enough of what they assumed meant getting snowed by Microsoft. It took a couple of years for Microsoft to recover. In 1992, the war was on. Microsoft released Windows 3.1 and it was clear that they were moving ideas and people between the OS/2 and Windows teams. I mean, the operating systems actually looked a lot alike. TCP/IP finally shipped in Windows in 1992, 3 years after the companies had co-developed the feature for OS/2. But both would go 32 bit in 1992. OS/2 version 2.0 would also ship, bringing a lot of features.

And both took off the blinders thinking about what the future would hold. Microsoft with Windows 95 and NT on parallel development tracks, and IBM launching multiple projects to find a replacement operating system. They tried an internal project, Workstation OS, which fizzled. IBM then did the unthinkable for Workplace OS. They entered into an alliance with Apple, taking on a number of Apple developers who formed what would be known as the Pink team. The Pinks moved into separate quarters and formed a new company called Taligent with Apple and IBM backing. Taligent planned to bring a new operating system to market in the mid-1990s. They would laser focus on PowerPC chips, thus abandoning what was fast becoming the WinTel world. They did show Workplace OS at Comdex one year, but by then Bill Gates was all too happy to swing by the booth knowing he’d won the battle. But they never shipped. By the mid-90s, Taligent would be rolled into IBM and focus on Java projects. Raw research that came out of the project is pretty pervasive today though. 
That was an example of a forward-looking project, though - and OS/2 continued to be developed, with OS/2 Warp (or 3) getting released in 1994. It included IBM Works, which came with a word processor that wasn’t Microsoft Word, a spreadsheet that wasn’t Microsoft Excel, and a database that wasn’t Microsoft Access. Works wouldn’t last past 1996. After all, Microsoft had Charles Simonyi by then. He’d invented the GUI word processor at Xerox PARC and was light years ahead of the Warp options. And the Office suite in general was gaining adoption fast. Warp was faster than previous releases, had way more options, and even browser support for early Internet adopters. But by then Windows 95 had taken the market by storm and OS/2 would see a rapidly declining customer base.

After spending nearly a billion dollars a year on OS development, IBM would begin downsizing once the battle with Microsoft was lost. Over 1,300 people. And as the number of people dropped, defects in the code grew and adoption dropped even faster. OS/2 would end in 2001. By then it was clear that IBM had lost the exploding PC market and that Windows was the dominant operating system in use. IBM’s control of the PC had slowly eroded and while they eked out a little more profit from the PC, they would ultimately sell the division that built and marketed computers to Lenovo in 2005. Lenovo would then enjoy the number one spot in the market for a long time. The blue ocean had resulted in lower margins though, and IBM had taken a different, more services-oriented direction.

OS/2 would live on. IBM discontinued support in 2006. It should have probably gone fully open source in 2005. It had already been renamed and rebranded as eComStation, first by an IBM Business Partner called Serenity. It would go open source(ish) and openoffice.org would be included in version two in 2010. Betas of 2.2 have been floating around since 2013 but, as with many other open source compilations of projects, it seems to have mostly fizzled out. Ed Iacobucci would go on to found or co-found other companies, including Citrix, which flourishes to this day.

So what really happened here? It would be easy, but an over-simplification, to say that Microsoft just kinda’ took the operating system. IBM had a vision of an operating system that, similar to the Mac OS, would work with a given set of hardware. Microsoft, being an independent software developer with no hardware, would obviously have a different vision, wanting an operating system that could work with any hardware - you know, the original open architecture that allowed early IBM PCs to flourish. IBM had a big business suit and tie corporate culture. Microsoft did not. IBM employed a lot of computer scientists. Microsoft employed a lot of hackers. IBM had a large bureaucracy; Microsoft could build an operating system like NT mostly based on hiring a single brilliant person and rapidly building an elite team around them. IBM was a matrixed organization. I’ve been told you aren’t an enterprise unless you’re fully matrixed. Microsoft didn’t care about all that. They just wanted the marketshare. When Microsoft abandoned OS/2, IBM could have taken the entire PC market from them. But I think Microsoft knew that the IBM bureaucracy couldn’t react quickly enough at an extremely pivotal time. Things were moving so fast. And some of the first real buying tornados just had to be reacted to at lightning speeds. 
These days we have literature on such things, and those going through them can bring in advisors or board members to help - like the roles Marc Andreessen plays with Airbnb and others. But this was uncharted territory, and due to some good, shrewd, and maybe sometimes downright bastardly decisions, Microsoft ended up leap-frogging everyone by moving fast, sometimes incurring technical debt that would take years to pay down, and grabbing the market at just the right time. I’ve heard this story oversimplified in one word: subterfuge. But that’s not entirely fair. When he was hired in 1993, Louis Gerstner pivoted IBM from a hardware and software giant into a leaner services organization. One that still thrives today. A lot of PC companies came and went. And the PC business infused IBM with the capital to allow the company to shoot from $29 billion in revenues to $168 billion just 9 years later. From the top down, IBM was ready to leave red oceans and focus on markets with fewer competitors.

Microsoft was hiring the talent. Picking up many of the top engineers from the advent of interactive computing. And they learned from the failures of the Xeroxes and Digital Equipments and IBMs of the world and decided to do things a little differently. When I think of a few Microsoft engineers that just wanted to build a better DOS sitting in front of a 60-page refinement of how a feature should look, I think maybe I’d have a hard time trying to play that game as well. I’m all for relentless prioritization. And user testing features and being deliberate about what you build. But when you see a limited window, I’m OK acting as well. That’s the real lesson here. When the day needs seizing, good leaders will find a way to blow up the establishment and release the team to go out and build something special.

And so yah, Microsoft took the operating system market once dominated by CP/M and, with IBM’s help, established themselves as the dominant player. And then took it from IBM. But maybe they did what they had to do… Just like IBM did what they had to do, which was move on to more fertile hunting grounds for their best in the world sales teams. So tomorrow, think of bureaucracies you’ve created or had created to constrain you. And think of where they are making the world better vs where they are just giving some controlling jackrabbit a feeling of power. And then go change the world. Because that is what you were put on this planet to do. Thank you so much for listening in to this episode of the history of computing podcast. We are so lucky to have you.


TiVo

     12/31/2019

TiVo is a computer. To understand the history, let's hop in our trusty time machine. It's 1997. Britain gives Hong Kong back to China, after 156 years of British rule. The Mars Pathfinder touches down on Mars. The OJ Simpson trials are behind us, but the civil suit begins. Lonely Scottish scientists clone a sheep and name it Dolly. The first Harry Potter book is published. Titanic is released. Tony Blair is elected Prime Minister of the United Kingdom. Hanson sang MMMBop. And Pokemon is released. No not Pokemon Go, but Pokemon. The world was changing. The Notorious BIG was gunned down not far from where I was living at the time. Blackstreet's No Diggity was still all over the radio. Third Eye Blind led a Semi-Charmed life and poppy grunge killed grungy grunge. And television. Holy buckets. Friends, Seinfeld, X Files, ER, Buffy the Vampire Slayer, Frasier, King of the Hill, Dharma and Greg, South Park, The Simpsons, Stargate, Home Improvement, Daria, Law and Order, Oz, Roseanne, The View, The Drew Carey Show, Family Matters, Power Rangers, JAG, Tenacious D, Lois and Clark, Spawn. Mosaic, the browser that had brought the web mainstream, saw its final release. Sergey Brin and Larry Page registered a weird domain name called Google because BackRub just seemed kinda' weird. The domains for Facebook, craigslist, and Netflix were also purchased. Bill Gates became the richest business nerd in the world. DVDs were released. The hair was big. But commercials were about to become a thing of the past. So were cords. 802.11, also known as Wi-Fi, became a standard. Microsoft bought WebTV, but something else was about to happen that would forever change the way we watched television. We'd been watching television in roughly the same way for about 70 years. Since January 13th, 1928, when the General Electric factory in Schenectady, New York broadcast as WGY Television, using call letters W2XB. That was for experiments, but they launched W2XBS a little later, now known as WNBC. They just showed a Felix the Cat figure spinning around on a turntable for 2 hours a day to test stuff. A lot of testing in different markets was happening, and The Queen's Messenger would become the first drama broadcast on television, out of Schenectady, later that year. But it wasn't until 1935 that the BBC started airing regular content and the late 1930s that regular programming started in the US, spreading slowly throughout the world, with Japan being one of the later countries to get a regular broadcast, in 1953. So for the next several decades a love affair began between humans and their televisions. Color took over prime time in 1972, after the price of color TVs, introduced over the couple of decades before, started to come down. Entire industries sprang up around the television, or at least migrated from newspapers and radio to television. Moon landings, football, baseball, the news, game shows. Since that shift to color TV, the microcomputer revolution had come. Computers were getting smaller. Hard drive capacity was growing. I could stroll down to the local Fry's and buy a Western Digital, an IBM Deskstar, a Seagate Barracuda, an HP Kittyhawk, or even a 10,000 RPM Cheetah. But the cheaper drives had come down enough for mass distribution. And so it was when Time Warner, a major US cable company at the time, decided to test a digital video system. They tapped Silicon Graphics alumni Jim Barton and Mike Ramsay to look into a set top box, or network appliance, or something. After initial testing, Time Warner didn't think it was quite the right time to build nationwide. 
They'd spent $100 million testing the service in Orlando. So the pair struck out on their own. Silicon Valley was abuzz about set top boxes, now that the web was getting big, dialup was getting easy, and PCs were pretty common fare. Steve Perlman's WebTV got bought by Microsoft for nearly half a billion dollars, became MSN TV, and laid the foundation for the Xbox hardware. I remember well that the prevailing logic of the time was that the set top box was the next big thing. The laggards would join the Internet revolution. Grandma and Grandpa would go online. So Ramsay and Barton got a check for $3M from VC firms to further develop their idea. They founded a company called Teleworld and started running public trials of a new device that came out of their research, called TiVo. The set top box would go beyond television and be a hub for home networking, managing refrigerators and thermostats, controlling your television, ordering grocery delivery, and even bringing the RFC for an internet coffee pot to life! But they were a little before their time on some of this. After some time, they narrowed the focus to a television receiver that could record content. The VC firms were so excited they ponied up another $300 million to take the product to market. Investors even asked how long it would take the TV networks to shut them down. Disruption was afoot. When Ramsay and Barton approached Randy Komisar, a veteran of Apple, Claris, and LucasArts, he suggested they look at charging for a monthly service. But he, as with the rest of Silicon Valley, bought their big idea, especially since Komisar had sat on the board of WebTV. TiVo would need to raise a lot of money to ink deals with the big content providers of the time. They couldn't alienate the networks. No one knew it yet, but the revolution in cutting the cord was on the way. Inking deals with those providers would prove to be much more expensive than building the boxes. They set about raising capital. They inked deals with Sony and Philips and announced a release of the first TiVo at the Consumer Electronics Show in January of 1999. They'd built an outstanding executive team. They'd done their work. And on March 31st, 1999, a Blue Moon, they released the Series 1 for about $500 and with a $9.95 monthly subscription fee. The device would use a modem to download TV show listings, which would later be replaced with an Ethernet, then Wi-Fi option. The Series1, like Apple devices at the time, would sport a PowerPC processor - although this one was a 403GCX that only clocked in at 54 MHz, cheap enough for an embedded system like this. It also came with 32 MB of RAM, a 13 to 60 gig IDE/ATA drive, and would convert analog signal into MPEG-2, storing from 14 to 60 hours of television programming. Back then, you could use the RCA cables or S-Video. They would go public later that year, raising $88 million and nearly doubling in value overnight. By 2000 TiVo was in 150,000 homes and burning through cash far faster than they were making it. It was a huge idea and if big ideas take time to percolate, huge ideas take a lot of time. And a lot of lawsuits. In order to support the new hoarder mentality they were creating, the Series2 would come along in 2002 and would come with up to a 250 gig drive, USB ports, CPUs from 166 to 266 MHz, from 32 to 64 megs of RAM, and the MPEG encoder got moved off to the Broadcom BCM704x chips. 
In 2006, the Series 3 would introduce HD support, add HDMI and 10/100 Ethernet, and support drives of 2 terabytes with 128 megs of RAM. Ramsay left the company in 2007 to go work as a venture partner. Barton, the CTO, would leave in 2012. Their big idea had been realized. They weren't needed any more. Ramsay and Barton would found streaming service Qplay, but that wouldn't make it over two years. By then, TiVo had become a verb. The Series4 brought us to over a thousand hours of television and supported Bluetooth and custom apps, and sported a Broadcom 400 MHz dual-core chip. But it was 2010. Popular DVD subscription service Netflix had been streaming and now had an app that could run on the Series 4. So did Rhapsody, Hulu, and YouTube. The race was on for streaming content. TiVo was still aiming for bigger, faster, cheaper set top boxes. But people were consuming content differently. TiVo gave apps, but Apple TV, Roku, Amazon, and other vendors were now in the same market for a fraction of the cost and without a subscription. In 2016 TiVo was acquired by Rovi for $1.1 billion and, as is often the case in these kinds of scenarios, the company has seemed listless since. Direction… unknown. After such a disruptive start, I can't imagine any innovation will ever recapture that spirit from the turn of the millennium. And so in December of 2019 (the month I'm recording this episode), after months spent trying to split TiVo into two companies so they could be sold separately, TiVo scrapped that idea and merged with Xperi. I find that we don't talk about TiVo much anymore. That doesn't mean they've gone anywhere, just that the model has shifted over the years. According to TechCrunch "TiVo CEO David Shull noted also that Xperi's annual licensing business includes over 100 million connected TV units, and relationships with content providers, CE manufacturers, and automotive OEMs, which now benefit from TiVo's technology." TiVo was a true disruptor. Along with virtual CEO Randy Komisar, they sold Silicon Valley on Monthly Recurring Revenue as a key performance indicator. They survived the .com bubble and even thrived in it. They made television interactive. They didn't cut our cords, but they expanded our minds so we could cut them. They introduced the idea of responsibly selling customer data as a revenue stream to help keep those fees in check. And in so doing, they let manufacturers micro-market goods and services. They revolutionized the way we consume content. Something we should all be thankful for. So next time you're binging a show from one of your favorite providers, just think about the fact that you might have to spend time with your family or friends if it weren't for TiVo. You owe them a huge thanks.


Stewart Brand: Hippy Godfather of the Interwebs

     12/7/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at the impact Stewart Brand had on computing. Brand was one of the greatest muses of the interactive computing and then the internet revolutions. This isn't to take anything away from his capacity to create, but the inspiration he provided gave him far more reach than nearly anyone in computing. There's a decent chance you might not know who he is. There's even a chance that you've never heard of any of his creations. But you live and breathe some of his ideas on a daily basis. So who was this guy and what did he do? Well, Stewart Brand was born in 1938, in Rockford, Illinois. He would go on to study biology at Stanford, enter the military and then study design and photography at other schools in the San Francisco area. This was a special time in San Francisco. Revolution was in the air. And one of the earliest scientific studies had him legitimately dosing on LSD. One of my all-time favorite books was The Electric Kool-Aid Acid Test, by Tom Wolfe. In the book, Wolfe follows Ken Kesey and his band of Merry Pranksters along a journey of LSD and Benzedrine riddled hippy goodness, riding a converted school bus across the country and delivering a new kind of culture straight out of Haight-Ashbury and to the heart of middle America. All while steering clear of the shoes FBI agents of the day wore. Here he would have met members of the Grateful Dead, Neal Cassady, members of the Hells Angels, Wavy Gravy, Paul Krassner, and maybe even Kerouac and Ginsberg. This was a transition from the Beat Generation to the Hippies of the 60s. Then he started the Whole Earth Catalog. Here, he showed the first satellite imagery of the planet Earth, which he'd begun campaigning NASA to release two years earlier. In the 5 years he made the magazine, he spread ideals like ecology, a do-it-yourself mentality, self-sufficiency, and what the next wave of progress would look like. People like Craig Newmark of craigslist would see the magazine and it would help to form a new world view. In fact, the Whole Earth Catalog was a direct influence on craigslist. Steve Jobs compared the Whole Earth Catalog to a 60s era Google. It inspired Wired Magazine. Earth Day would be created two years later. Brand would loan equipment and inspire spinoffs of dozens of magazines and books. And even an inspiration for many early websites. The catalog put him in touch with so, so many influential people. One of the first was Doug Engelbart, whose Mother Of All Demos showed off the mouse and the first video conferencing. In fact, Brand helped produce the Mother Of All Demos! As we moved into the 70s he chronicled the oncoming hacker culture, and the connection to the 60s-era counterculture. He inspired and worked with Larry Brilliant, Lee Felsenstein, and Ted Nelson. He basically invented being a "futurist," founding CoEvolution Quarterly and spreading the word of digital utopianism. The Whole Earth Software Review would come along with the advent of personal computers. The end of the 70s would also see him become a special advisor to California governor Jerry Brown. In the 70s and 80s, he saw the Internet form and went on to found one of the earliest Internet communities, called The WELL, or Whole Earth 'Lectronic Link. 
Collaborations in the WELL gave us John Perry Barlow's Electronic Frontier Foundation, a safe haunt for Kevin Mitnick while on the run, Grateful Dead tape trading, and many of the early Digerati. There would be other virtual communities and innovations to the concept, like social networks, eventually giving us online forums, 4chan, Yelp, Facebook, LinkedIn, and corporate virtual communities. But it started with The WELL. He would go on to become a visiting scientist in the MIT Media Lab, organize conferences, and found the Global Business Network with Peter Schwartz, Jay Ogilvy, and other great thinkers to help promote approaches like scenario planning, a corporate strategy practice that involves thinking from the outside in. This is now a practice inside Deloitte. The decades proceeded on and Brand inspired whole new generations to leverage humor to push the buttons of authority. Much as the Pranksters inspired him on the bus. But it wasn't just anti-authority. It was a new and innovative approach in an upcoming era of maximizing short-term profits at the expense of the future. Brand founded The Long Now Foundation with an outlook that looks 10,000 years into the future. They started a clock on Jeff Bezos' land in Texas, they started archiving languages approaching extinction, Brian Eno led seminars about long-term thinking, and the foundation inspired Anathem, a novel from one of my favorite authors, Neal Stephenson. Peter Norton, Pierre Omidyar, Bruce Sterling, Chris Anderson of the Economist, and many others are also involved. But Brand inspired other counter-cultures as well. In the era of e-zines, he inspired Jesse Dryden, who Brand knew as Jefferson Airplane drummer Spencer Dryden's kid. The kid turned out to be dFx, who would found HoHoCon, an inspiration for DefCon. Stewart Brand wrote 5 books in addition to the countless hours he spent editing books, magazines, web sites, and papers. Today, you'll find him pimping blockchain and cryptocurrency, in an attempt to continue decentralization and innovation. He inherited a playful counter-culture. He watched the rise and fall and has since both watched and inspired the innovative iterations of countless technologies, extending of course into bio-hacking. He's hobnobbed with the hippies, the minicomputer time-sharers, the PC hackers, the founders of the internet, the tycoons of the web, and then helped set strategy for industry, NGOs, and governments. He left something with each. Urania was the muse of astronomy, some of the top science in ancient Greece. And he would probably giggle if anyone compared him to the muse. Both on the bus in the 60s, and in his 80s today. He's one of the greats and we're lucky he graced us with his presence on this rock - that he helped us see from above for the first time. Just as I'm lucky you elected to listen to this episode. So next time you're arguing about silly little things at work, think about what really matters and listen to one of his Ted Talks. Context. 10,000 years. Have a great week and thanks for listening to this episode of the History of Computing Podcast.


BASIC

     11/24/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we're going to look at the history of the BASIC programming language. We say BASIC but really BASIC is more than just a programming language. It's a family of languages and stands for Beginner's All-purpose Symbolic Instruction Code. As the name implies, it was written to help students that weren't math nerds learn how to use computers. When I was selling a house one time, someone was roaming around in my back yard - apparently they'd been to the open house - and they asked if I was a computer scientist after they saw a dozen books I'd written on my bookshelf. I really didn't know how to answer that question. We'll start this story with Hungarian John George Kemeny. This guy was pretty smart. He was born in Budapest and moved to the US with his family in 1940 when his family fled anti-Jewish sentiment and laws in Hungary. Some of his family would go on to die in the Holocaust, including his grandfather. But safely nestled in New York City, he would graduate high school at the top of his class and go on to Princeton. Check this out, he took a year off to head out to Los Alamos and work on the Manhattan Project under future Nobel laureate Richard Feynman. That's where he met fellow Hungarian immigrant John von Neumann - the two of them part of a group George Marx wrote about in his book on great Hungarian emigrant scientists and thinkers, called The Martians. When he got back to Princeton he would get his Doctorate and act as an assistant to Albert Einstein. Seriously, THE Einstein. Within a few years he was a full professor at Dartmouth and would go on to publish great works in mathematics. But we're not here to talk about all those other contributions to making the world an all around awesome place. You see, by the 60s math was evolving to the point that you needed computers. And Kemeny and Thomas Kurtz would do something special. Now Kurtz was another Dartmouth professor who got his PhD from Princeton. He and Kemeny got thick as thieves and wrote the Dartmouth Time-Sharing System (keep in mind that time sharing was all the rage in the 60s, as it gave more and more budding computer scientists access to those computer-things that, prior to the advent of Unix and the PC revolution, had mostly been reserved for the high priests of places like IBM). So time sharing was cool, but the two of them would go on to do something far more important. In 1956, they would write DARSIMCO, or Dartmouth Simplified Code. As with Pascal, you can blame Algol. Wait, no one has ever heard of DARSIMCO? Oh… I guess they wrote that other language you're here to hear the story of as well. So in 59 they got a half million dollar grant from the Alfred P. Sloan foundation to build a new department building. That's when Kurtz actually joined the department full time. Computers were just going from big batch-processed behemoths to interactive systems. They tried teaching with DARSIMCO, FORTRAN, and the Dartmouth Oversimplified Programming Experiment, a classic 1960s-era acronym: DOPE. But they didn't love the command structure nor the fact that the languages didn't produce feedback immediately. What was it called? Oh, so in 1964, Kemeny wrote the first iteration of the BASIC programming language and Kurtz joined him very shortly thereafter. They did it to teach students how to use computers. It's that simple. 
And as most software was free at the time, they released it to the public. We might think of this as open source-ish by today's standards. I say ish as Dartmouth actually chose to copyright BASIC. Kurtz has said that the name BASIC was chosen because "We wanted a word that was simple but not simple-minded, and BASIC was that one." The first program I wrote was in BASIC. BASIC used line numbers and read kinda' like the English language. The first line of my program said: 10 PRINT "Charles was here" - and the computer responded with "Charles was here". The second program I wrote just added a second line that said: 20 GOTO 10. Suddenly "Charles was here" took up the whole screen and I had to ask the teacher how to terminate the signal. She rolled her eyes and handed me a book. And that my friend, was the end of me for months. That was on an Apple IIc. But a lot happened with BASIC between 1964 and then. As with many technologies, it took some time to float around and evolve. The syntax was kinda' like a simplified FORTRAN, making my FORTRAN classes in college a breeze. That initial distribution evolved into Dartmouth BASIC, and they received a $300k grant and used student slave labor to write the initial BASIC compiler. Mary Kenneth Keller was one of those students and went on to finish her Doctorate in 65 along with Irving Tang, the two of them receiving the first PhDs awarded in computer science in the US. After that she went off to Clarke College to found their computer science department. The language is pretty easy. I mean, like Pascal, it was made for teaching. It spread through universities like wildfire during the rise of minicomputers like the PDP from Digital Equipment and the resultant Data General Nova. This led to the first text-based games in BASIC, like Star Trek. And then came the Altair and one of the most pivotal moments in the history of computing, the porting of BASIC to the platform by Microsoft co-founders Bill Gates and Paul Allen. Tiny BASIC appeared shortly thereafter and suddenly everyone needed "a basic." You had Commodore BASIC, BBC BASIC, BASIC for the Trash-80 (the TRS-80), the Apple II, Sinclair, and more. Programmers from all over the country had learned BASIC in college on minicomputers, and when the PC revolution came, a huge part of that was the explosion of applications, most of which were written in… you got it, BASIC! I typically think of the end of BASIC coming in 1991 when Microsoft bought Visual Basic off of Alan Cooper and object-oriented programming became the standard. But the things I could do with a simple IF, THEN, ELSE statement. Or a FOR, TO statement, or a WHILE or REPEAT or DO loop. Absolute values, exponential functions, cosines, tangents, even super-simple random number generation. And input and output was just INPUT and PRINT, or LIST for source. Of course, functional programming was always simpler and more approachable. So there, you now have Kemeny as a direct connection between Einstein and the modern era of computing. Two immigrants that helped change the world. One famous, the other with a slightly more nuanced but probably no less important impact in a lot of ways. Those early BASIC programs opened our eyes. Games, spreadsheets, word processors, accounting, Human Resources, databases. Kemeny would go on to chair the commission investigating Three Mile Island, a partial nuclear meltdown that was a turning point for nuclear power in the US. 
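To make those keywords a little more concrete, here is a minimal sketch of what a classic line-numbered BASIC program looked like. It's written in the style of the Microsoft-flavored home computer dialects you'd find on something like an Apple II or a Commodore; the secret number and the messages are just made up for illustration, and older dialects like the original Dartmouth BASIC were stricter about what could follow THEN, so treat this as illustrative rather than something guaranteed to run unchanged on every machine of the era.

10 REM A TINY GUESSING GAME IN CLASSIC LINE-NUMBERED BASIC
20 LET S = 7
30 PRINT "GUESS A NUMBER BETWEEN 1 AND 10"
40 INPUT G
50 IF G = S THEN GOTO 90
60 IF G < S THEN PRINT "TOO LOW"
70 IF G > S THEN PRINT "TOO HIGH"
80 GOTO 30
90 PRINT "YOU GOT IT"
100 END

The line numbers doubled as the editor and the control flow - GOTO 30 is the entire loop - which is exactly the kind of simplicity that made BASIC so approachable, and exactly what later structured and object-oriented languages moved away from.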
I wonder what Kemeny thought when he read the following on the Statue of Liberty: Give me your tired, your poor, Your huddled masses yearning to breathe free, The wretched refuse of your teeming shore. Perhaps, like many before and after, he thought that he would breathe free and with that breath, do something great, helping bring the world into the nuclear era and preparing thousands of programmers to write software that would change the world. When you wake up in the morning, you have crusty bits in your eyes and things seem blurry at first. You have to struggle just a bit to get out of bed and see the sunrise. BASIC got us to that point. And for that, we owe them our sincerest thanks. And thank you dear listeners, for your contributions to the world in whatever way they may be. You’re beautiful. And of course thank you for giving me some meaning on this planet by tuning in. We’re so lucky to have you, have a great day!


Agile Software Development

     9/19/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on Agile Software Development. Agile software development is a methodology, or anti-methodology, or approach to software development that evolves the requirements a team needs to fulfill and the solutions they need to build in a collaborative, self-organized, and cross-functional way. Boy, that's a lot to spit out there. I was in an elevator the other day and I heard someone say: "That's not very agile." And at that moment, I knew that I just couldn't help but do an episode on agile. I've worked in a lot of teams that use a lot of variants of agile: scrum, Kanban, scrumban, Extreme Programming, Lean Software Development. Some of these are almost polar opposites and you still hear people talk about what is agile, and if they want to make fun of people doing things an old way, they'll say something like waterfall. Nothing ever was waterfall, given that you learn on the fly, find re-usable bits, or hit a place where you just say that's not possible. But that's another story. The point here is that agile is, well, weaponized to back up what a person wants someone to do. Or how they want a team to be run. And it isn't always done from an informed point of view. Why is Agile an anti-methodology? Think of it more like a classification maybe. There were a number of methodologies like Extreme Programming, Scrum, Kanban, Feature Driven Development, Adaptive Software Development, RAD, and Lean Software Development. These had come out to bring shape around a very similar idea. But over the course of 10-20 years, each had been developed in isolation. In college, I had a computer science professor who talked about "adaptive software development" from his days at a large power company in Georgia back in the 70s. Basically, you are always adapting what you're doing based on speculation about how long something will take, collaboration on that speculation, and what you learn while actually building. This shaped how I view software development for years to come. He was already making fun of waterfall methodologies, or a cycle where you write a large set of requirements and stick to them. Waterfall worked well if you were building a computer to land people on the moon. Adaptive development was a way of saying "we're not engineers, we're software developers." Later in college, with the rapid proliferation of the Internet and computers into dorm rooms, I watched the emergence of rapid application development, where you let the interface requirements determine how you build. But once someone weaponized that by putting a label on it, or worse, forking the label into spiral and unified models, then they became much less useful and the next hot thing had to come along. Kent Beck built a methodology called Extreme Programming - or XP for short - in 1996 and that was the next hotness. Here, we release software in shorter development cycles and software developers, like police officers on patrol, work in pairs, reviewing and testing code and not writing each feature until it's required. The idea of unit testing and rapid releasing really came out of the fact that the explosion of the Internet in the 90s meant people had to ship fast, and this was also during the rise of really mainstream object-oriented programming languages. 
The nice thing about XP was that you could show a nice graph where you planned, managed, designed, coded, and tested your software. The rules of Extreme Programming included things like "Code the unit test first" - and "A stand up meeting starts each day." Extreme Programming is one of these methodologies. Scrum is probably the one most commonly used today. But the rest, as well as the Crystal family of methodologies, are now classified as Agile software development methodologies. So it's like a parent. Is agile really just a classification then? No. So where did agile come from? By 2001, Kent Beck, who developed Extreme Programming, met with Ward Cunningham (who built WikiWikiWeb, the first wiki), Dave Thomas (a programmer who has since written 11 books), Jeff Sutherland and Ken Schwaber (who designed Scrum), and Jim Highsmith (who developed that Adaptive Software Development methodology), along with many others who were at the time involved in trying to align on an organizational methodology that allowed software developers to stop acting like people that built bridges or large buildings. Most had day jobs but they were like-minded and decided to meet at a quaint resort in Snowbird, Utah. They might have all wanted to use the methodologies that each of them had developed. But if they had all been jerks then we might not have had a shift in how software would be written for the next 20+ years. They decided to start with something simple, a statement of values. Instead of bickering and being dug into specific details, they were all able to agree that software development should not be managed in the same fashion as engineering projects are run. So they gave us the Manifesto for Agile Software Development… The Manifesto reads: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: * Individuals and interactions over processes and tools * Working software over comprehensive documentation * Customer collaboration over contract negotiation * Responding to change over following a plan That is, while there is value in the items on the right, we value the items on the left more. But additionally, the principles dig into and expand upon some of that. The principles behind the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. Business people and developers must work together daily throughout the project. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Working software is the primary measure of progress. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Continuous attention to technical excellence and good design enhances agility. Simplicity--the art of maximizing the amount of work not done--is essential. The best architectures, requirements, and designs emerge from self-organizing teams. 
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. Many of the words here are easily weaponized. For example, "satisfy the customer." Who's the customer? The product manager? The end user? The person in an enterprise who actually buys the software? The person in that IT department that made the decision to buy the software? In the scrum methodology, the customer is not known. The product owner is their representative. But the principles don't need to identify that; they just use the word so each methodology makes sure to cover it. Now take "continuous delivery." People frequently just lump CI in there with CD. I've heard continuous design, continuous improvement, continuous deployment, continuous podcasting. Wait, I made the last one up. We could spend hours going through each of these and identifying where they aren't specific enough. Or, again, we could revel in their lack of specificity by letting them point us in the direction of a methodology where these words get much more specific meanings. Ironically, I know accounting teams at very large companies that have scrum masters, engineering teams for big projects with a project manager and a scrum master, and even a team of judges that use agile methodologies. There are now scrum masters embedded in most software teams of note. But once you see Agile on the cover of the Harvard Business Review - and you hate to do this given all the classes in agile/XP/scrum - you have to start wondering what's next. For 20 years, we've been saying "stop treating us like engineers" or "that's waterfall." Every methodology seems to grow. Right after I finished my PMP I was on a project with someone else that had just finished theirs. I think they tried to implement the entire Project Management Body of Knowledge. If you try to have every ceremony from Scrum, you're not likely to even have half a day left over to write any code. But you also don't want to be like the person on the elevator, weaponizing only small parts of a larger body of work, just to get your way. And more importantly, to admit that none of us have all the right answers and be ready to, as they say in Extreme Programming: Fix XP when it breaks - which is similar to Boyd's Destruction and Creation, or the sustenance and destruction in Lean Six-Sigma. Many of us forget that last part: be willing to walk away from the dogma and start over. Thomas Jefferson called for a revolution every 20 years. We have two years to come up with a replacement! And until you replace me, thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!


Wikipedia

     9/2/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the history of Wikipedia. The very idea of a single location that could store all the known information in the world began with Ptolemy I, founder of the Greek dynasty that ruled Egypt following the death of Alexander the Great. He and his son amassed hundreds of thousands of scrolls in the Library of Alexandria from 331 BC on. The Library was part of a great campus of the Musaeum where they also supported great minds, starting with Ptolemy I's patronage of Euclid, the father of geometry, and later including Archimedes, the father of engineering, Hipparchus, the founder of trigonometry, Hero, the father of mechanics, and Herophilus, who gave us the scientific method, and countless other great Hellenistic thinkers. The Library entered into a slow decline that began with the expulsion of intellectuals from Alexandria in 145 BC. Ptolemy VIII was responsible for that. Always be wary of people who attack those they can't win over, especially when they start blaming the intellectual elite for the problems of the world. The decline continued until the Library burned, first with a small fire accidentally set by Caesar in 48 BC and then for good in the 270s AD. In the centuries since there have been attempts here and there to gather great amounts of information. The first known encyclopedia was the Naturalis Historia by Pliny the Elder, never completed because he was killed in the eruption of Vesuvius. One of the better known is the Encyclopedia Britannica, starting off in 1768. Mass production of these was aided by the printing press, but given that there's a cost to producing those materials and a margin to be made in selling them, that encouraged a somewhat succinct exploration of certain topics. The advent of the computer era of course led to encyclopedias on CD and then to online encyclopedias. Encyclopedias at the time employed experts in certain fields and paid them for compiling and editing articles for volumes that would then be sold. As we say these days, this was a business model just waiting to be disrupted. Jimmy Wales was moderating an online discussion board on Objectivism and happened across Larry Sanger in the early 90s. They debated and became friends. Wales started Nupedia, which was supposed to be a free encyclopedia, funded by advertising revenue. As it was to be free, they were to recruit thousands of volunteer editors. People of the caliber that had been previously hired to research and write articles for encyclopedias. Sanger, who was pursuing a PhD in philosophy from Ohio State University, was hired on as editor-in-chief. This was a twist on the old model of compiling an encyclopedia, and a twist that didn't work out as intended. Volunteers were slow to sign up, but Nupedia went online in 2000. By later in the year, only two articles had made it through the review process. When Sanger told Ben Kovitz about this, he recommended looking at the emerging wiki culture. This had been started with WikiWikiWeb, developed by Ward Cunningham in 1994 and named after a shuttle bus that ran between airport terminals at the Honolulu airport. WikiWikiWeb had been inspired by HyperCard but needed to be multi-user so people could collaborate on web pages, quickly producing content on new patterns in programming. He wanted to make non-writers feel OK about writing. 
Sanger proposed using a wiki to be able to accept submissions for articles and edits from anyone, while still having a complicated review process to accept changes. The reviewers weren't into that, so they started a side project they called Wikipedia in 2001 with a user-generated model for content, or article, generation. The plan was to generate articles on Wikipedia and then move or copy them into Nupedia once they were ready. But Wikipedia got mentioned on Slashdot. In 2001 there were nearly 30 million websites and about half a billion people using the web. Back then a mention on the influential Slashdot could make a site. And it certainly helped. They grew and more and more people started to contribute. They hit 1,000 articles in March of 2001, that increased ten-fold by September, and then another four-fold the next year. It started working independently of Nupedia. The dot-com bubble had burst in 2000 and by 2002 Nupedia had to lay Sanger off, and he left both projects. Nupedia slowly died and was finally shut down in 2003. Eventually the Wikimedia Foundation, which now owns and operates Wikipedia, was built to help unlock the world's knowledge. Wikimedia also includes Commons for media, Wikibooks, which includes free textbooks and manuals, Wikiquote for quotations, Wikiversity for free learning materials, MediaWiki, the source code for the site, Wikidata for pulling large amounts of data from Wikimedia properties using APIs, Wikisource, a library of free content, Wikivoyage, a free travel guide, Wikinews, free news, and Wikispecies, a directory containing over 687,000 species. Many of the properties have very specific ways of organizing data, making it easier to work with en masse. The properties have grown because people like to be helpful and Wales allowed self-governance of articles. To this day he rarely gets involved in the day-to-day affairs of the Wikipedia site, other than the occasional puppy dog looks in banners asking for donations. You should donate. He does have 8 principles the site is run by: 1. Wikipedia's success to date is entirely a function of our open community. 2. Newcomers are always to be welcomed. 3. "You can edit this page right now" is a core guiding check on everything that we do. 4. Any changes to the software must be gradual and reversible. 5. The open and viral nature of the GNU Free Documentation License and the Creative Commons Attribution/Share-Alike License is fundamental to the long-term success of the site. 6. Wikipedia is an encyclopedia. 7. Anyone with a complaint should be treated with the utmost respect and dignity. 8. Diplomacy consists of combining honesty and politeness. This culminates in 5 pillars Wikipedia is built on: 1. Wikipedia is an encyclopedia. 2. Wikipedia is written from a neutral point of view. 3. Wikipedia is free content that anyone can use, edit, and distribute. 4. Wikipedia's editors should treat each other with respect and civility. 5. Wikipedia has no firm rules. Sanger went on to found Citizendium, which uses real names instead of handles, thinking maybe people will contribute better content if their name is attached to something. The web is global. Throughout history there have been encyclopedias produced around the world, with the Four Great Books of Song coming out of 11th century China and the Encyclopedia of the Brethren of Purity coming out of 10th century Persia. When Wikipedia launched, it was in English. Wikipedia launched a German version using the deutsche.wikipedia.com subdomain. 
It now lives at de.wikipedia.org and Wikipedia has gone from being 90% English to being almost 90% non-English, meaning that Wikipedia is able to pull in even more of the world's knowledge. Wikipedia picked up nearly 20,000 English articles in 2001, over 75,000 new articles in 2002, and that number has steadily climbed, reaching over 3,000,000 by 2010, and we're closing in on 6 million today. The English version is 10 terabytes of data uncompressed. If you wanted to buy a printed copy of Wikipedia today, it would run over 2,500 books. By 2009 Microsoft Encarta shut down. By 2010 Encyclopedia Britannica stopped printing their massive set of books and went online. You can still buy encyclopedias from specialty makers, such as the World Book. Ironically, Encyclopedia Britannica does now put real names of people on articles they produce on their website, in an ad-driven model. There are a lot of ads. And the content isn't linked from as many places, nor is it as thorough. Creating a single location that could store all the known information in the world seems like a pretty daunting task. Compiling the non-copyrighted works of the world is now the mission of Wikipedia. The site is the fifth most visited in the world and is read by nearly half a billion people a month, with over 15 billion page views per month. Anyone who has gone down the rabbit hole of learning about Ptolemy I's involvement in developing the Library of Alexandria and then read up on his children and how his dynasty lasted until Cleopatra and how… well, you get the point… can understand how they get so much traffic. Today there are over 48,000,000 articles and over 37,000,000 registered users who have contributed articles, meaning if we set 160 Great Libraries of Alexandria side-by-side we would have about the same amount of information Wikipedia has amassed. And it's done so because of the contributions of so many dedicated people. People who spend hours researching and building pages and taking on the need to provide references to cite the data in the articles (by the way, Wikipedia is not supposed to represent original research), more people who patrol and look for content contributed by people on a soapbox or with an agenda rather than just reporting the facts, and another team looking for articles that need more information. And they do these things for free. While you can occasionally see frustrations from contributors, it is truly one of the best things humanity has done. This allows us to rediscover our own history, effectively compiling all the facts that make up the world we live in, often linked to the opinions that shape them in the reference materials, which include the over 200 million works housed at the US Library of Congress and over 25 million books scanned into Google Books (out of about 130 million). As with the Great Library of Alexandria, we do have to keep away those who seek to throw out the intellectuals of the world, and keep the great works being compiled from falling to waste due to inactivity. Wikipedia keeps a history of pages, to avoid revisionist history. The servers need to be maintained, but the database can be downloaded and is routinely downloaded by plenty of people. I think the idea of providing an encyclopedia for free that was sponsored by ads was sound. Pivoting the business model to make it open was revolutionary. 
With the availability of the data for machine learning and the ability to enrich it with other sources like genealogical research, actual books, maps, scientific data, and anything else you can manage, I suspect we'll see contributions we haven't even begun to think about! And thanks to all of this, we now have a real compendium of the world's knowledge, getting more and more accurate and holistic by the day. Thank you to everyone involved, from Jimbo and Larry, to the moderators, to the staff, and of course to the millions of people who contribute pages about all the history that makes up the world as we know it today. And thanks to you for listening to yet another episode of the History of Computing Podcast. We're lucky to have you. Have a great day! Note: This work was produced in large part due to the compilation of historical facts available at https://en.wikipedia.org/wiki/History_of_Wikipedia


Once Upon A Friendster

     8/17/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the former social networking pioneer, Friendster. Today when you go to friendster.com you get a page saying that the social network is taking a break. The post was put up in 2018. How long did Rip Van Winkle sleep? But what led to the rise of the first big social network and, well, what happened? The story begins in 1973. Talkomatic was a chat room and was a hit in the PLATO, or Programmed Logic for Automatic Teaching Operations, community at the University of Illinois, an educational learning system that had been running since 1960. Dave Woolley and Doug Brown at the University of Illinois brought chat, then the staff built TERM-Talk the same year, adding screen sharing, and PLATO Notes would be added, where you could add notes to your profile. This was the inspiration for the name of Lotus Notes. Then in the 80s came Bulletin Board Systems, 84 brought FidoNet, 88 brought IRC, 96 brought ICQ, and in 96 we got Bolt.com, the first social networking and video website, with SixDegrees coming in 1997 as the first real social media website. AOL Instant Messenger showed up the same year and AOL bought ICQ in 99. It was pretty sweet that I didn't have to remember all those ICQ numbers any more! In 1999, Yahoo! and Microsoft got in the game, launching tools called Messenger at about the same time, and LiveJournal came along, as well as Habbo, a social networking site for games. By 2001 SixDegrees shut down and Messenger was shipped with XP. But 2002. That was the year the Euro hit the street. Before Britain dissed it. That was the year Israeli and Palestinian conflicts escalated. Actually, that's a lot of years, regrettably. I remember the scandals at Enron and Worldcom well that year, ultimately resulting in Sarbanes-Oxley to counter the more than 5 trillion dollars lost to corporate scandals that sent the economy into a tailspin. My Georgia Bulldogs football team beat Arkansas to win the SEC title and then beat Florida State in the Sugar Bowl. Nelly released Hot In Here and Eminem released Lose Yourself and Without Me. In film, Harry Potter was searching for the Chamber of Secrets and Frodo was on a great trek to the Two Towers. Eminem was in the theaters as well with 8 Mile. And Friendster was launched by Jonathan Abrams in Mountain View, California. They wanted to get people making new friends and meeting in person. It was an immediate hit and people flocked to the site. They grew to three million users in just a few months, catching the attention of investors. As a young consultant, I loved keeping track of my friends who I never got to see in person using Friendster. Napster was popular at the time and the name Friendster came from a mashup of friends and Napster. With this early success, Friendster took $12 million in funding from VC firms Kleiner Perkins Caufield & Byers and Benchmark Capital the next year. That was the year a Harvard student named Mark Zuckerberg launched FaceMash with his roommate Eduardo Saverin for Harvard students in a kinda' "Hot or Not" game. They would later buy Instagram as a form of euphoric recall, looking back on those days. 
Google has long wanted a social media footprint and tried to buy Friendster in 2003, but when rejected launched Orkut in 2004, which ended up being big mostly in Brazil, tried Google Friend Connect in 2008, which lasted until 2012, Google Buzz, which launched in 2010 and only lasted a year, Google Wave, which launched in 2009 and also only lasted a year, and of course Google+, which ran from 2011 to 2019. Google is back at it again with a new social network called Shoelace out of their Area 120 incubator. The $30 million in Google stock they offered would be worth a billion dollars today. MySpace was also launched in 2003 by Chris DeWolfe and Tom Anderson, growing to have more traffic than Google over time. But Facebook launched in 2004, and after Friendster had problems keeping the servers up and running, Friendster's board replaced Abrams as CEO and moved him to chairman of the board. He was replaced by Scott Sassa. And then in 2005 Sassa was replaced by Taek Kwon, who was replaced by Kent Lindstrom, who was replaced by Richard Kimber. Such rapid churn in the top spot means problems. A rudderless ship. In 2006 they added widgets to keep up with MySpace. They didn't. They also opened up a developer program and opened up APIs. They still had 52 million unique visitors worldwide in June 2008. But by then, MySpace had grown to 7 times their size. MOL Global, an online payments processor from Malaysia, bought the company in 2009 and relaunched the site. All user data was erased and Friendster provided an export tool to move data to other popular sites at the time, such as Flickr. In 2009 Friendster had 3 million unique visitors per day, but after the relaunch that dropped to less than a quarter million by the end of 2010. People abandoned the network. What happened? Facebook eclipsed Friendster's traffic in 2009. Friendster became something more used in Asia than the US. Really, though, I remember early technical problems. I remember not being able to log in, so moving over to MySpace. I remember slow loading times. And I remember more and more people spending time on MySpace, customizing their MySpace page. Facebook did something different. Sure, you couldn't customize the page, but the simple layout loaded fast and was always online. This reminds me of the scene in the show Silicon Valley, when they have to grab the fire extinguisher because they set the house on fire from having too much traffic! In 2010, Facebook acquired Friendster's portfolio of social networking patents for $40 million. In 2011, News Corp sold MySpace for $35 million after it had been valued in the billions at its peak in 2008. After continuing its decline, Friendster was sold to a social gaming site in 2015, trying to capitalize on the success that Facebook had doing online gaming. But after an immediate burst of users, it too was not successful. In 2018 the site finally closed its doors. Today Friendster is the 651,465th ranked site in the world. There are a few things to think about when you look at the Friendster story: 1. The Internet would not be what it is today without sites like Friendster to help people want to be on it. 2. The first company on a new thing isn't always the one that really breaks through. 3. You have to, and I mean have to, keep your servers up. This is a critical aspect of maintaining your momentum. I was involved with one of the first 5 Facebook apps. And we had no idea 2 million people would use that app in the weekend it was launched. 
We moved mountains to get more servers and clusters brought online and refactored SQL queries on the fly, working over 70 hours in a weekend. And within a week we hit 10 million users. That app paid for dozens of other projects and was online for years. 4. When investors move in, the founder usually gets fired at the first sign of trouble. Many organizations simply can't find their equilibrium after that and flounder. 5. Last but not least: Don't refactor every year, but if you can't keep your servers up, you might just have too much technical debt. I'm sure everyone involved with Friendster wishes they could go back and do many things differently. But hindsight is always 20/20. They played their part in the advent of the Internet. Without early pioneers like Friendster we wouldn't be where we are today. As Heinlein said, "yet another crew of Rip Van Winkles." But Buck Rogers eventually did actually wake back up, and maybe Friendster will as well. Thank you for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!


The History Of Android

     8/22/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we're going to look at the emergence of Google's Android operating system. Before we look at Android, let's look at what led to it. Frank Canova built a device he showed off as "Angler" at COMDEX in 1992. This would be released as the Simon Personal Communicator by BellSouth and manufactured as the IBM Simon by Mitsubishi. The Palm, Newton, Symbian, and Pocket PC, or Windows CE, would come out shortly thereafter and rise in popularity over the next few years. CDMA would slowly come down in cost over the next decade. Now let's jump to 2003. At the time, you had Microsoft Windows CE, the Palm Treo was maturing and supported dual-band GSM, Handspring merged into the Palm hardware division, and Symbian could be licensed, but I never met a phone of theirs I liked. Like, the Nokia phones looked about the same as many printer menu screens. One other device that is more relevant because of the humans behind it was the T-Mobile Sidekick, which actually had a cool flippy motion to open the keyboard! Keep that Sidekick in mind for a moment. Oh, and let's not forget: a fantastic name. The mobile operating systems were limited. Each was proprietary. Most were menu driven and reminded us more of an iPod, released in 2001. I was a consultant at the time and remember thinking it was insane that people would pay hundreds of dollars for a phone. At the time, flip phones were all the rage. A cottage industry of applications sprung up, like Notify, that made use of app frameworks on these devices to connect my customers to their Exchange accounts so their calendars could sync wirelessly. The browsing experience wasn't great. The messaging experience wasn't great. The phones were big and clunky. And while you could write apps for Symbian in Qt Creator or Flash Lite or Python for S60, few bothered. That's when Andy Rubin left Danger, the company he cofounded that made the Sidekick, and joined up with Rich Miner, Nick Sears, and Chris White in 2003 to found a little company called Android Inc. They wanted to make better mobile devices than were currently on the market, and they set out to write an operating system based on Linux that could rival anything out there. Rubin was no noob when cofounding Danger. He had been a robotics engineer in the 80s and a manufacturing engineer at Apple for a few years, and then got his first mobility engineering gig when he bounced to General Magic to work on Magic Cap, a spinoff from Apple, from 92 to 95. He then helped build WebTV from 95 to 99. Many in business academia have noted that Android existed before Google and that's why it's as successful as it is today. But Google bought Android in 2005, years before the actual release of Android. Apple had long been rumor-milling a phone, which would mean a mobile operating system as well. Android was sprinting towards a release that was somewhat Blackberry-like, focused on competing with similar devices on the market at the time, like the Blackberries that were all the rage. Obama and Hillary Clinton were all about theirs. As a consultant, I was stoked to become a Blackberry Enterprise Server reseller and used that to deploy all the things. The first iPhone was released in 2007. I think we sometimes think that along came the iPhone and Blackberries started to disappear. It took years. But the fall was fast. 
While the iPhone was also impactful, the Android-based devices were probably more so. That release of the iPhone kicked Andy Rubin in the keister and he pivoted over from the Blackberry-styled keyboard to a touch screen, which changed… everything. Suddenly this weird innovation wasn't yet another frivolous expensive Apple extravagance. The logo helped grow the popularity as well, I think. Internally at Google, Dan Morrill started creating what were known as Dandroids. But the bugdroid, as it's known, was designed by Irina Blok on the Android launch team. It was eventually licensed under Creative Commons, which resulted in lots of different variations of the logo; a sharp contrast to the control Apple puts around the usage of their own logo. The first version of the shipping Android code came along in 2008 and the first phone that really shipped with it wasn't until the HTC Dream in 2009. This device had a keyboard you could press but also had a touch screen, although we hadn't gotten a virtual keyboard yet. It shipped with an ARM11, 192MB of RAM, and 256MB of storage. But you could expand it up to 16 gigs with a microSD card. Oh, and it had a trackball. It had 802.11b and g and Bluetooth, and shipped with Android 1.0. But it could be upgraded up to 1.6, Donut. The hacker in me just… couldn't help but mod the thing, much as I couldn't help but jailbreak the iPhone back before I got too lazy to bother. Of course, the Dev Phone 1, which didn't require you to hack it, shipped soon after - something Apple waited until 2019 to copy. The screen was smaller than that of an iPhone. The keyboard felt kinda' junky. The app catalog was lacking. It didn't really work well in an office setting. But it was open source. It was a solid operating system and it showed promise as to the future of not-Apple in a post-Blackberry world. Note: Any time a politician uses a technology it's about 5 minutes past being dead tech. Of Blackberry, iOS, and Android, Android was last in devices sold using those platforms in 2009, although the G1, as the Dream was also known, quickly took 9% market share. But then came Eclair. Unlike sophomore efforts from bands, there's something about a 2.0 release of software. By the end of 2010 there were more Androids than iOS devices. 2011 was the peak year of Blackberry sales, with over 50 million being sold, but those were the laggards spinning out of the buying tornado, and those sales bought the pivot and the R&D for the fruitless next few Blackberry releases. Blackberry marketshare would zero out in just 6 short years. iPhone continued a nice climb over the past 8 years. But Android sales are now in the billions per year. Ultimately, to quote Time, Blackberry's "failure to keep up with Apple and Google was a consequence of errors in its strategy and vision." If you had to net-net that, touch vs menus was a substantial part of it. By 2017 the Android and iOS marketshare was a combined 99.6%. In 2013, Sundar Pichai, now Google's CEO, took over Android when Andy Rubin became embroiled in sexual harassment allegations; Rubin now acts as CEO of Playground Global, an incubator for hardware startups. The open source nature of Android and it being ready to fit into a device from manufacturers like HTC led to advancements that inspired and were inspired by the iPhone, leading us to the state we're in today. 
Let’s look at the releases per year and per innovation:

* 1.0, API 1, 2008: Included early Google apps like Gmail, Maps, Calendar, of course a web browser, a media player, and YouTube.
* 1.1, 2009: Came in February the next year and was code named Petit Four.
* 1.5 Cupcake, 2009: Gave us an on-screen keyboard and third-party widgets, then apps on the Android Market, now known as the Google Play Store. The HTC Dream soon got this update. Open source everything.
* 1.6 Donut, 2009: Customizable screen sizes and resolutions, CDMA support. And the short-lived Dell Streak! Because of this resolution work we got the joy of learning all about the tablet. Oh, and Universal Search and more emphasis on battery usage!
* 2.0 Eclair, 2009: The advent of the Motorola Droid, turn-by-turn navigation, real-time traffic, live wallpapers, speech to text. But the pinch to zoom from iOS sparked a war with Apple. We also got the ability to limit accounts. Oh, and new camera modes that would have impressed even George Eastman, and Bluetooth 2.1 support.
* 2.2 Froyo, 2010: Four months later came Froyo, with under-the-hood tuning, voice actions, and Flash support, something Apple has never had. And here came the HTC Incredible S as well as one of the most popular mobile devices ever built: the Samsung Galaxy S2. This was also the first hotspot option, and we got 3G and better LCDs. That whole tethering thing, it took a year for iPhone to copy it.
* 2.3 Gingerbread, 2010: The green from the robot came into Gingerbread, with the black and green motif moving front and center. More sensors, NFC, a new download manager, and copy and paste got better.
* 3.0 Honeycomb, 2011: The most important thing was when Matias Duarte showed up and reinvented the Android UI. The holographic design traded out the green and blue and gave you more screen space. This kicked off a permanent overhaul and brought a card UI for recent apps. Enter the Galaxy S9 and the Huawei Mate 2.
* 4.0 Ice Cream Sandwich, later in 2011: Duarte’s designs started really taking hold. For starters, let’s get rid of buttons. That’s important and has been a critical change for other devices as well. It reunited tablets and phones with a single vision. On-screen buttons brought the card-like appearance into app switching. Smarter swiping added swiping to dismiss, which changed everything for how we handle email and texts with gestures. You can thank this design for Tinder.
* 4.1 to 4.3 Jelly Bean, 2012: Added some sweet, sweet fine tuning to the foundational elements from Ice Cream Sandwich. Google Now, which was supposed to give us predictive intelligence; interactive notifications; expanded voice search; advanced search, still with the card-based everything for results. We also got multiuser support for tablets. And the Android Quick Settings pane. We also got widgets on the lock screen - but those are a privacy nightmare and didn’t last for long. Automatic widget resizing, wireless display projection support, restricted profiles on multiple user accounts, making it a great parent device. Enter the Nexus 10. AND TWO FINGER DOWN SWIPES.
* 4.4 KitKat, 2013: Ended the era of the dark screen; lighter screens and neutral highlights moved in. I mean, Matrix was way before that after all. “OK, Google” showed up, furthering the competition with Apple and Siri. Hands-free activation. A panel on the home screen, and a stand-alone launcher. AND EMOJIS ON THE KEYBOARD. Increased NFC security.
* 5.0 Lollipop, 2014: Brought 64-bit support, Bluetooth Low Energy, and a flatter interface. But more importantly, we got annual releases like iOS.
* 6.0 Marshmallow, 2015: Gave us Doze mode, sticking it to iPhone with even more battery saving features. App security improved, with prompts to grant apps access to resources like the camera and phone. The Nexus 5X and 6P brought fingerprint scanners and USB-C.
* 7.0 Nougat, 2016: Gave us quick app switching, a different lock screen and home screen wallpaper, split-screen multitasking, and gender- and race-inclusive emojis.
* 8.0 Oreo, 2017: Gave us floating video windows, which got kinda’ cool once app makers started adding support in their apps for it. We also got a new file browser, which came to iOS in 2019. And more battery enhancements with prettied-up battery menus. Oh, and notification dots on app icons, borrowed from Apple.
* 9.0 Pie, 2018: Brought notch support and navigation similar to that of the iPhone X, adapting to a soon-to-be bezel-free world. And of course, the battery continues to improve. This brings us into the world of the Pixel 3.
* 10: Likely sometime in 2019.

While the initial release of Android shipped with the Linux 2.6 kernel, that has been updated as appropriate over the years, with version 3 in Ice Cream Sandwich and version 4 in Nougat. Every release of Android tends to have an increment in the Linux kernel.

Now, Android is open source. So how does Google make money? Let’s start with what Google does best: advertising. Google makes a few cents every time you click on an ad in messages or web pages or any other little spot they’ve managed to drop an ad into. Then there’s the Google Play Store. Apple makes 70% more revenue from apps than Android, despite the fact that Android apps have twice the number of installs. The old adage is if you don’t pay for a product, you are the product. I don’t tend to think Google goes overboard with all that, though. And Google is probably keeping Caterpillar in business just to buy big enough equipment to move their gold bars from one building to the next on campus.

Any time someone’s making money, lots of other people want a taste. Like Oracle, which owns Java and sued Google over the APIs used in Android. And the competition between iOS and Android makes both products better for consumers! Now look out for Android Auto, Android Things, Android TV, Chrome OS, the Google Assistant and others - given that other types of vendors can make use of Google’s open source offerings to cut R&D costs and get to market faster! But more importantly, Android has contributed substantially to the rise of ubiquitous computing, no matter how much money you have. I like to think the long-term impact of such a democratization of mobility and the Internet will make the world a little less Idiocracy and a little more Wikipedia.

Thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!


Coherent Is Not UNIX!

     5/17/2020

In the current day Linux is the most widely used UNIX-like operating system. Its rise to prominence has been an amazing success story. From its humble beginnings Linux has grown to power everything from supercomputers to car stereos. But it's not the first UNIX clone. A much earlier system existed, called Coherent. And as it turns out, both Linux and Coherent share a lot of similarities. The biggest difference is that Coherent was closed source.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1949: DOJ Sues AT&T Over Antitrust Violations
1973: AT&T UNIX V4 Goes Public
1975: AT&T UNIX V6 Released
1977: First Version of BSD Circulates
1977: XYBASIC Released by Mark Williams Company
1980: Coherent Released for PDP/11
1983: Coherent Comes to the IBM PC/XT
1995: Mark Williams Company Closes


Spam, Email, and Best Intentions

     10/4/2020

Spam emails are a fact of modern life. Who hasn't been sent annoying and sometimes cryptic messages from unidentified addresses? To understand where spam comes from we need to look at the origins of email itself. Email has had a long and strange history, and so too have some of its most dubious uses.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


JOVIAL, the Evolution of Programming

     9/6/2020

The creation of FORTRAN and early compilers set the stage to change computing forever. However, they were just the start of a much longer process. Just like a spoken language, programming languages have morphed and changed over time. Today we are looking at an interesting case of this slow evolution. JOVIAL was developed during the Cold War for use in the US military, and it's been in constant small-scale use ever since. Its story gives us a wonderful insight into how programming languages change over time, and why some stick around while others die out.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Making Disks Flexible, Part 2

     3/8/2020

The floppy disk is one of the most iconic pieces of technology. While not in use in the modern day, there was a period of 40 years where the floppy disk was synonymous with data storage. Today we pick up where we finished in the last episode, with the rise and fall of the 5 1/4 inch disk. We will be looking at the creation and spread of the 3 1/2 inch floppy disk. How did Sony, a non-player in the computer market, create this runaway success? And how did Apple contribute to its rise?

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1980: Sony Invents Microfloppy Disk
1983: Apple Builds Prototype Mac with 3 1/2 Inch Floppy


Applesoft BASIC, Microsoft and Apple's First Collaboration

     4/19/2020

It's easy to think of Apple and Microsoft as bitter rivals, but that hasn't always been the case. The two companies have a very complicated relationship, and a very long history. This connection goes all the way back to the 1970s and a product called Applesoft BASIC. It would become stock software on nearly every Apple II computer ever sold, it kept Apple competitive in the early home computer market, and it may have saved Microsoft from bankruptcy.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1976: Apple I hits shelves, Integer BASIC soon follows
1977: Apple II Released
1978: AppleSoft BASIC Ships
1997: Bill Gates saves Apple from Bankruptcy


Road to Transistors, Part II

     6/14/2020

In this episode we finish up our look at the birth of the transistor. But to do that we have to go back to 1880, the crystal radio detector, and examine the development of semiconductor devices. Once created, the transistor would change not just how computers worked, but how they could be used. That change didn't happen overnight, and it would take even longer for the transistor to move from theory to reality.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1939: Russell Ohl Discovers P-N Junction
1947: Point Contact Transistor Invented at Bell Labs
1954: TRADIC, First Transistorized Computer, Built


Learning Along the Oregon Trail

     9/20/2020

We've all played the Oregon Trail, but what do you know about its origins? First developed as a mainframe program all the way back in 1971, the Oregon Trail was intended as an educational game first and foremost. In fact, it traces its lineage to some of the first efforts to get computers into the classroom. Today we are following the trail back to its source and seeing how the proper environment was built to create this classic game.

You can play the 1975 version here: https://archive.org/details/OregonTrailMainframe 

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Vectrex, Playing With Vectors

     4/5/2020

The 1980s were a turbulent and fast-moving decade for the video game industry. There were huge success stories, rapid advancements in technology, and the North American Video Game Crash. Caught up in all of this was an ambitious machine called the Vectrex. In an era dominated by pixelated graphics the Vectrex brought higher resolution vector images and early 3D to market. But ultimately it would be swept away during the market's crash. Today we are taking a dive into the development of the Vectrex, what made it different, and how it survives into the modern day.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Becoming Portable

     6/28/2020

Portable computing is now totally ubiquitous. There's a good chance you are listening to this episode on a tiny portable computer right now. But where did it all come from? As it turns out the first portable computer was designed all the way back in 1972. This machine, the DynaBook, only ever existed on paper. Despite that handicap, in the coming years it would inspire a huge shift in both personal and portable computing.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1972: DynaBook designed by Alan Kay

1976: NoteTaker project starts

1982: GRiD Compass released


PCM, Origins of Digital Audio

     5/3/2020

Every day we are inundated with digital audio: phone calls, music, even this podcast. Digitized sound has become so ubiquitous that it often fades into the background. What makes this all possible is a technology called Pulse Code Modulation, or PCM. This isn't new technology; its roots trace all the way back to 1937. So how exactly did digital audio come into being well before the first digital computers?

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1937: PCM Developed by Alec Reeves
1941: Germany Cracks A-3 Code
1943: Bell Labs Develops SIGSALY (aka The Green Hornet)
1957: First PCM Synthesizer, MUSIC I, Programmed by Max Mathews


A Guided Tour of the Macintosh

     5/10/2020

In this byte-sized episode I take a look at a pack-in that came with the first Macintosh. Alongside Apple stickers, manuals, and the computer itself there was a single cassette tape labeled "A Guided Tour of the Macintosh". The purpose? It's a strange addition to the Mac's packaging, but a great example of Apple's attention to detail and ingenuity.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1984: A Guided Tour of the Macintosh Released


8080 VS Z80

     7/12/2020

In 1974 Intel released the 8080 processor, a chip long in the making. It was the first microprocessor that had the right combination of power and price to make personal computers viable. But that same year a small group of employees defected and formed their own company called Zilog. Among this group were Masatoshi Shima and Federico Faggin, two of the principal architects behind the 8080 as well as Intel's other processors. Zilog would go on to release a better chip, the Z80, that blew Intel out of the water. Today we continue our Intel series with a look into this twisting story.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important Dates:

1974: Intel 8080 hits shelves

1976: Zilog Z80 goes on sale


Making Disks Flexible, Part 1

     2/24/2020

The floppy disk was a ubiquitous technology for nearly 40 years. From mainframes to home computers, the plastic disk was everywhere. And in the decades it was around there were very few changes made to how it fundamentally worked. So how did it get so popular? What made the floppy disk so flexible? And how did it finally fall out of favor? In this episode we will look at the technology's early days.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1971: 8 Inch Floppy Disk (Minnow) Created at IBM
1976: Shugart Invents 5 1/4 Inch Floppy Disk


Brad Chase Interview, Marketing Lead for Windows 95 and Much More

     7/5/2020

I recently got the chance to sit down and talk with Microsoft alumnus Brad Chase. He was the product manager for Microsoft Works on the Macintosh, DOS 5, and DOS 6, and the marketing lead for Windows 95, as well as much more. We talk about the Apple-Microsoft relationship, the groundbreaking launch of Windows 95, and what it takes to sell software.

Editing for this episode was handled by Franck, you can follow him on instagram: www.instagram.com/frc.audio/

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Memex and Hyperlinks

     3/22/2020

The widespread use of the internet has shaped our world; it's hard to imagine the modern day without it. One of its biggest features would have to be the hyperlink. But despite the modern net feeling so new, links actually date back as far as the 1930s and the creation of the Memex: a machine that was never built but would influence coming generations of dreamers.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1927: Differential Analyzer Built at MIT
1938: Rapid Selector Built by Vannevar Bush
1945: As We May Think Published


The Rise of CP/M

     8/9/2020

The IBM PC and MS-DOS, the iconic duo of the early 80s. The two are so interconnected that it's hard to mention one without the other. But in 1980 DOS wasn't IBM's first choice for their soon-to-be flagship hardware. IBM had wanted to license Gary Kildall's CP/M, but in a strange series of events the deal fell through. Legend states that Kildall lost the contract because he was too busy flying his private plane to talk business with IBM, but is that true? Today we look at the development of CP/M, why it was a big deal, and why the PC ultimately shipped with Microsoft software.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing


Analog Computing and the Automatic Totalisator

     7/26/2020

A lot of the technology we associate with the modern day started on anachronistic machines. I'm not talking about mainframes, I'm talking older. Today we are looking at George Julius's Automatic Totalisator, an analog computer used to manage betting at horse tracks around the world. These were massively complex machines, some networked over 200 input terminals, and they did it all mechanically.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important Dates:

1913: Premier Tote installed in Auckland


Road to Transistors: Part I

     5/31/2020

The transistor changed the world. It made small, complex, and cheap computing possible. But it wasn't the first attempt to crack the case. There is a long and strange lineage of similar devices leading up to the transistor. In this episode we take a look at two of those devices. First the vacuum tube, one of the first components that made computing possible. Then the cryotron, the first device purpose built for computers.

You can find the full audio of Atanasoff's talk here: https://www.youtube.com/watch?v=Yxrcp1QSPvw

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1880: Thomas Edison Rediscovers Thermionic Emission
1904: Ambrose Fleming Invents the Vacuum Tube
1906: Lee de Forest Patents the Audion Triode Tube
1937: George Stibitz Creates First Binary Adding Circuit from Spare Relays
1938: John Atanasoff Visits a 'Honky-Tonk'
1941: ABC, First Vacuum Tube Calculator, is Completed
1953: Cryotron Invented by Dudley Allen Buck


Evolution of the Mouse

     12/2/2019

The computer mouse is a ubiquitous device; it's also one of the least changed devices we use with a computer. The mice we use today have only seen small incremental improvements since the first mouse was developed. So how did such a long-lasting design take shape, and how did it travel the decades up to now?

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1961: First Mouse Developed at Engelbart's ARC Lab
1972: Xerox Develops Rollerball Mouse for Alto
1979: Apple LISA Mouse Designed


Bill's Problem with Piracy

     11/25/2019

In this mini-episode we look at a strange event in Microsoft's early history and their first case of piracy. Along the way you will learn about the best advertising campaign in history: the MITS MOBILE Computer Caravan!

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1976: 'Open Letter to Hobbyists' Written by Bill Gates

http://tee.pub/lic/4jnwv7m_ZPw


Spacewar! (the Game)

     8/25/2019

It really seems like in the last decade video games have gone from a somewhat niche hobby to a widespread part of our culture. Nowadays, there are a multitude of ways to get our gaming fix. Consoles, handheld game systems, and even smartphones make video games more accessible than ever. But when and how exactly did video games start to creep into the modern consciousness?

In this episode we look at some of the earliest video games and how they came to be.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1962: Spacewar! Developed


Minitel, the French Network Connection

     9/22/2019

Today we are dipping back into the deep and complex history of the proto-internet. We are going to be looking at Minitel, a France-Wide-Web that was built in the 1980s as a way to help the country stay relevant in the digital age.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1980: Minitel Program Networks France


Minitel Research Lab Interview, with Julien Mailland and Kevin Driscoll

     9/29/2019

Today I am joined by Julien Mailland and Kevin Driscoll, co-authors of Minitel: Welcome to the Internet and proprietors of the Minitel Research Lab (minitel.us). We talk about their book, how they first started working on Minitel terminals, and the ongoing work to preserve Minitel.


Acorn and the BBC

     7/14/2019

The Raspberry Pi has been a huge success at its stated goals, and continues to be. But this isn't the first time a British company designed and developed a computer as an accessible platform for learning programming. In fact, if you've read much about the Pi then you've probably seen people calling it a "BBC Micro 2".

 

So what was the BBC Micro? What did the BBC have to do with creating a new computer? And how is any of this connected to the 21st century version?

 

Today I want to share the story from a slice of a somewhat forgotten age: the BBC's involvement with Acorn Computers and how they worked together to educate a generation of programmers. Along the way we will see how a small UK company created an impressive series of computers whose legacy may not be well known in the States, but has had a surprising impact on the world.

 

Special thanks to Neil from Retro Man Cave for sharing his memories of the BBC Micro. You can find him on YouTube here: https://www.youtube.com/channel/UCLEoyoOKZK0idGqSc6Pi23w


Digital Voices

     6/16/2019

What are the origins of our modern day text-to-speech systems? In this episode we will dive into the rich history of electronic talking machines. Along the way I will tell you the story of the vocoder, the first singing computer, and a little about the father of modern synthesized speech.


Attack of the PC Clones

     6/30/2019

Today, I want to share with you the story of the first PC clones and how they cemented the rise of the x86 chipset.

 

Most of this story takes place between 1981 and 1984, but I think it's fair to say that these 3 years are some of the most influential for the PC's rise to domination. So let's start the story with a discussion of the IBM PC, how it was special, and then examine how reverse engineering it led to the current x86 monoculture we see today.


Creeping Towards Viruses

     10/6/2019

Computer viruses today pose a very real threat. However, it turns out that their origins are actually very non-threatening. Today, we are going to look at some of the first viruses. We will see how they developed from technical writing, to pulp sci-fi, to traveling code.

I talk about The Scarred Man by Gregory Benford in this episode, you can read the full short story here: http://www.gregorybenford.com/extra/the-scarred-man-returns/

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1949: John Von Neumann Writes 'Theory and Organization of Complex Automata'
1969: 'The Scarred Man' Written by Gregory Benford, Coined Term 'Virus'
1971: Creeper Virus Unleashed


Space Travel!

     5/27/2019

In this mini-episode we talk about Space Travel, an obscure video game from 1969.


Journey to Altair

     9/8/2019

Today we are going to be traveling back to the late 1970s to take a look at the early days of the home computer. And specifically how Microsoft found a foothold at just the right time and place. And for Bill Gates and Paul Allen that would come in the form of BASIC.

Along the way we will cover the Altair 8800, vaporware, and how Bill Gates violated Harvard student conduct.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1974: Altair 8800 Released
1975: Microsoft BASIC Released


Cooking in Y2K

     1/6/2020

In this mini episode we will look at the Y2K bug, and some of the recipes it spawned. That's right, we are talking about Y2K cookbooks!

You can find more Y2K compliant food here: https://web.archive.org/web/19991012032855/http://y2kkitchen.com/

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1999: Y2K Kitchen Hits Shelves


Networking for a Nuclear War, the Soviets

     7/28/2019

Oftentimes people assume the US is the homeland of the internet. Funded by the US Department of Defense, the first attempts at a large-scale network were started during the height of the Cold War, and a large part of its design was redundancy and robustness. Some of the researchers were quite frank about its purpose: to create a network that could survive an upcoming nuclear war. This military-hardened infrastructure was known as ARPANET.


But that's only part of the story, and the US wasn't the first to the party. The fact is, the internet was born during the Cold War. This was an era that saw huge advancements in science, both for better and for worse. The space race put humans on the moon, and the nuclear arms race put humans dangerously close to annihilation. So it should be no surprise that America's counterpart in this age, the Soviet Union, was working towards their own proto-internet.


4004: The First Microprocessor

     11/4/2019

Intel is one of the dominant forces in the computer industry today, and they may be best known for their line of microprocessors. These chips have powered computers going back to the early days of microcomputers. How did Intel become so entrenched in the field? Well, it all started with the 4004 CPU, the first "one-chip" computer.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1971: Intel 4004 Released


Networking for a Nuclear War, the Americans

     8/11/2019

In this episode we are going to explore the ARPANET. This is a companion to the last episode, which covered contemporary Soviet attempts to create an early internet.

Like with last time, today we are still in the Cold War era. Now, this won't be a point by point comparison of Soviet to US networks. They are totally different beasts. Instead, what I want to do is look at how ARPANET was developed, what influenced it, and how it would kick start the creation of the internet.


Lost in the Colossal Cave

     10/20/2019

Colossal Cave Adventure is one of the most influential video games of all time. Originally written for the DEC PDP-10 mainframe in 1975, the game has not only spread to just about any computer out there, but it has inspired the entire adventure/RPG genre. In this episode we are going to look at how Adventure got its start, how it evolved into a full game, and how it came to be a launch title for the IBM PC.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1975: Colossal Cave Adventure Developed

http://tee.pub/lic/MKt4UiBp22g


Unix for the People, Part 2

     6/2/2019

Now, as the name suggests this is the second part of a series on the history of UNIX. Part 1 mainly covers the background leading up to UNIX. If you haven't listened to it yet, I strongly suggest you go do that now. A lot of what was covered in part 1 provides needed context for our discussion today.

 

Just as a quick recap, last time I told you about CTSS and Multics, two of the earliest time-sharing operating systems. Today, we are going to be picking up where we left off: Bell Labs had just left Project MAC and decided to start their own time-sharing project. What they didn't realize was that this new project, called UNIX, would soon outshine all of its predecessors. But when this all started, in 1969 on a spare mainframe at Bell Labs, there was no hint at its amazing future.


Going Rogue

     1/26/2020

Many video games today make use of randomized content, some more than others. It may seem like an obvious feature, but it turns out that procedural generation didn't really catch on in video games until the 1980 release of Rogue. The game itself never saw much commercial success, but was wildly popular among UNIX users. In this episode we look at Rogue, how it was created, and the legacy that we still see today.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1980: Rogue Written for PDP/11
1984: Rogue Ported to PC, Macintosh, Atari ST


Return of Viruses: The Spread

     10/18/2020

It's time to round out spook month with a return to one of last year's topics: the computer virus. Malicious code traveling over networks is actually a relatively new phenomenon; early viruses were much different. In this episode we examine ANIMAL and Elk Cloner, two early viruses that were meant as practical jokes and spread by hapless computer users. Along the way we will see cases of parallel evolution, name calling, and find out if there is any one origin to the word "virus".

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


IBM Gets Personal

     11/2/2020

This episode is not about the IBM PC. In 1981 the Personal Computer would change the world. Really, it's hard to talk about home computing without diving into it. But I've always had an issue with the traditional story. The PC didn't come out of left field, IBM had actually been trying to make a home computer for years. In 1981 those efforts would pay off, but the PC wasn't revolutionary hardware for Big Blue, it was evolutionary. So today we are looking at that run up with SCAMP, the 5100, and the Datamaster.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


From Antiquity to Bitcoin: A Brief History of Currency, Banking, and Finance

     11/8/2020

Today we’re going to have a foundational episode, laying the framework for further episodes on digital piracy, venture capital, accelerators, Bitcoin, PayPal, Square, and others. I’ll try to steer clear of dense macro- and microeconomics and instead just lay out some important moments from antiquity to the modern financial system, so we don’t have to repeat all of this in those episodes. I apologize to professionals in these fields whose life work I am about to butcher in oversimplification. 

Like a lot of nerds who found themselves sitting behind a keyboard writing code, I read a lot of science fiction growing up. The dystopian and utopian outlooks on what the future holds for humanity give us a peek into what progress is. Dystopian interpretations tell of what amount to warlords and a fragmentation of humanity back to what things were like thousands of years ago. The utopian interpretations often revolve around questions about how society will react to social justice, or a market in equilibrium.

The dystopian science fiction represents the past of economics and currency. And the move to online finances and digital currency tracks with what science fiction told us was coming in a more utopian future. My own mental model of economics began with classes on micro and macro economics in college but evolved when I was living in Verona, Italy.

We visited several places built by a family called the Medicis. I’d had bank accounts up until then, but that’s the first time I realized how powerful banking and finance as an institution was. Tombs, villas, palaces. The Medici built lasting edifices to the power of their clan. They didn’t invent money, but they made enough to be on par with the richest modern families. 

It’s easy to imagine humans from the times of hunter-gatherers trading an arrowhead for a chunk of meat. As humanity moved to agriculture and farming, we began to use grain and cattle as currency. By 8000 BC people began using tokens for trade in the Middle East. And metal objects came to be traded as money around 5,000 BC.

And around 3,000 BC we started to document trade. Where there’s money and trade, there will be abuse. By 1,700 BC early Mesopotamians were even issuing regulations for the banking industry in the Code of Hammurabi. By then private institutions were springing up to handle credit, deposits, interest, and loans. Some of this was handled on clay tablets. 

And that term private is important. These banking institutions were private endeavors. As the Egyptian empire rose, farmers could store grain in warehouses and then, during the Ptolemaic era, began to trade the receipts of those deposits. We can still think of these as tokens and barter items though. Banking had begun around 2000 BC in Assyria and Sumeria, but these were private institutions effectively setting their own splintered and sometimes international markets. Gold was being used, but it had to be measured and weighed each time a transaction was made. 

Until the Lydian stater. Lydia was an empire that began in 1200 BC and was conquered by the Persians around 546 BC. It covered western Anatolia in what is now Turkey, including modern Salihli and Manisa, before the Persians took it. One of their most important contributions to the modern world was the first state-sponsored coinage, around 700 BC. The coins were electrum, which is a mix of gold and silver. 

And here’s the most important part: the standard weight was guaranteed by an official stamp. The Lydian king Croesus then added the concept of bimetallic coinage, or having one coin made of gold and the other of silver. Each had a different denomination, where a dozen of the lower denomination was worth one of the higher. They then figured out a way to keep counterfeit coins off the market with a Lydian stone, a touchstone: the color of the streak a coin left on it could be compared to marks made by known gold. And thus modern coinage was born. The Lydian merchants became the merchants that helped move goods between Greece and Asia, spreading the concept of the coin. Cyrus the Second defeated the Lydians and Darius the Great would issue the gold daric, with a warrior king wielding a bow. And so heads of state came to adorn coins. 

As with most things in antiquity, there are claims that China or India introduced coins first. Bronze shells have been discovered in the ruins of Yin, the old capital of the Shang dynasty, dating back hundreds of years before the Lydians. But if we go there this episode will be 8 hours long. 

Exodus 22:25-27 “If you lend money to my people—to any poor person among you—never act like a moneylender. Charge no interest.”

Let’s put that Bible verse in context. So we have coins and banks. And international trade. It’s mostly based on the weight of the coins. Commerce rises and over the centuries banks got so big they couldn’t be allowed to fail without crashing the economy of an empire. Julius Caesar expands the empire of Rome and gold flows in from conquered lands. One thing that seems constant through history is that interest rates from legitimate lenders tend to range from 3 to 14 percent. Anything less and you are losing money. Anything more and you’ve penalized the borrower to the point they can’t repay the loan. The scarcer the capital, the more you have to charge. Like the US in the 80s. So old Julius meets an untimely fate, there are wars, and Augustus manages to solidify the empire. Augustus reformed taxes and introduced a lot of new services to the state: a standing army, the Praetorian Guard, official firefighting and police forces, and much of the old Roman road system that the empire is now so well known for. It was an over-40-year reign and one of the greatest in history. But greatness is expensive. 

Tiberius had to bail out banks and companies in the year 33. Moneylending sucks when too many people can’t pay you back. Augustus had solidified the Roman Empire, and by the time Tiberius came around Rome was a rich import destination. Money was being lent abroad, and so there was less and less gold in the city. Interest rates had plummeted to 4 percent. Again, we’re in a time when money is based on the weight of a coin, and there simply weren’t enough coins in circulation due to the reach of the empire. And so, for all my Libertarian friends - empires learned the hard way that business and commerce are essential services and must be regulated. If money cannot be borrowed then crime explodes. People cannot be left to starve. Especially when we don’t all live on land that can produce food anymore. 

Any time the common people are left behind, there is a revolt. The greater the disparity, the greater the revolt. The early Christians were heavily impacted by the moneylending practices in that era between Julius Caesar and Tiberius, and the Bible, as an economic textbook, is littered with references to usury, showing the blame placed on emerging financial markets for the plight of the commoner. Progress often involves two steps forward and one back to let all of the people in a culture reap the rewards of innovations.  

The Roman Empire continued on gloriously for a long, long time. Over time, Rome fell. Other empires came and went. As they did, they minted coins to prove how important the ruling faction was. It’s easy to imagine a farmer in the dark ages following the collapse of the Roman Empire dying and leaving half of the farm to each of two children. Effectively each owns one share. That stock can then be used as debt and during the rise of the French empire, 12th century courretiers de change found they could regulate debts as brokers. The practice grew. 

Bankers work with money all day. They get crafty and think of new ways to generate income. The Venetians were trading government securities, and in 1351 they outlawed spreading rumors to lower the prices of those securities - and thus rules against market manipulation were born. By 1409 Flemish traders began to broker the trading of debts in Bruges at an actual market. Italian companies began issuing shares, and joint stock companies were born, allowing European powers to colonize the Americas. That colonization increased the gold supply in Europe fivefold, resulting in the first great gold rush. 

European markets, flush with cash, speculation, and investments, grew, and by 1611 the stock market was born in Amsterdam. The Dutch East India Company sold shares to the public and brought us options, bonds, and derivatives. Dutch perpetual bonds were introduced, and one issued in 1629 is still paying interest. So we got the bond market for raising capital. 

Over the centuries leading to the industrial revolution, banking, finance, and markets became the means with which capitalism and private property replaced totalitarian regimes, the power of monarchs, and the centralized control of production. As the markets rose, modern economics was born, with Adam Smith codifying much of the known work to that point, including that of the French physiocrats. The gold standard began around 1696 and gained in popularity. The concept was to allow paper money to be freely convertible into a pre-defined amount of gold. Therefore, paper money could replace gold and still be backed by gold, just as it was in antiquity. By 1789 we were running a bit low on gold, so we introduced the bimetallic standard, where silver was worth one fifteenth of gold and a predefined market ratio was set.  

Great thinking in economics goes back to antiquity, but since the time of Tiberius, rulers had imposed regulation: taxes to pay for public goods, bailouts for businesses that had to be bailed out, and tariffs to control the movement of goods in and out of a country. To put it simply, if too much gold left the country, interest rates would shoot up, inflation would devalue the ability to buy goods, and as people specialized in industries, those who didn’t produce food, like the blacksmiths or cobblers, wouldn’t be able to buy food. And when people can’t buy food, bad things happen. 

Adam Smith believed in self-regulation though, which he codified in his seminal work Wealth of Nations, in 1776. He believed that what he called the “invisible hand” of the market would create economic stability, which would lead to prosperity for everyone. And that became the framework for modern capitalistic endeavors for centuries to come. But not everyone agreed. Economics was growing and there were other great thinkers as well. 

Again, things fall apart when people can’t get access to food, and so Thomas Malthus responded with a theory that the rapidly growing populations of the world would outgrow the ability to feed all those humans. Where Smith had focused on the demand for goods, Malthus focused on scarcity of supply. Which led another economist, Karl Marx, to see the means of production as key to providing the Maslovian hierarchy. He saw capitalism as unstable and believed the creation of an owner (or stock trader) class and a working class was contrary to finding balance in society. He accurately predicted the growing power of business and how that power would control, and so hurt, the worker to the benefit of the business. We got marginalism, general equilibrium theory, and over time we could actually test theories, and the concepts that began with Smith became a science, economics, with that branch known as neoclassical.

Lots of other fun things happen in the world. Bankers begin instigating innovation and progress. Booms or bull markets come, markets over-index and/or supplies become scarce, and recessions or bear markets ensue. Such is the cycle. To ease the burdens of an increasingly complicated financial world, England officially adopted the gold standard in 1821, which led to the emergence of the international gold standard, adopted by Germany in 1871 and, by 1900, most of the world. Gaining in power and influence, the nations of the world stockpiled gold up until World War I in 1914. The international political upheaval led to a loss of faith in the gold standard, and the global gold supply began to fall behind the growth in the global economy. 

JP Morgan dominated Wall Street in what we now call the Gilded Age. He made money by reorganizing and consolidating railroad businesses throughout America. He wasn’t just the banker; he was the one helping the businesses become more efficient, digging into how they worked and reorganizing and merging corporate structures. He then financed Edison’s research and instigated the creation of General Electric. He lost money investing in a Tesla project when Tesla wanted to go wireless. He bought Carnegie Steel in 1901, the first modern buyout, which gave us US Steel. The industrialists from the turn of the century increased productivity at a rate humanity had never seen. We had the biggest boom market humanity had ever seen, and then, when the productivity gains slowed and the profits and earnings masked the slowdown in output, a bubble of sorts formed and the market crashed in 1929. 

These markets are about returns on investments. Those require productivity gains, as they are usually based on margin, or the ability to sell more goods without increasing the cost. That crash in 1929 sent panic through Wall Street and wiped out investors around the world. Consumer confidence, and so spending and investment, was destroyed. With a sharp reduction needed in supply, industrial output faltered and workers were laid off, creating a vicious cycle. 

The crash also signaled the end of the gold standard. The pound and franc were mismanaged, Germany, a new power, was having trouble repaying war debts, commodity prices collapsed, and, thinking a reserve of gold would keep them legitimate, countries raised interest rates, further damaging the global economy. High interest rates reduce investment. England finally suspended the gold standard in 1931, which sparked other countries to do the same, with the US raising the number of dollars per ounce of gold from $20 to $35 and so obtaining enough gold to back the US dollar as the de facto standard. 

Meanwhile, science was laying the framework for the next huge boom - which would be greater in magnitude, margins, and profits. Enter John Maynard Keynes and Keynesian economics, the rise of macroeconomics. In a departure from neoclassical economics, he believed that the world economy had grown to the point that aggregate supply and demand would not find equilibrium without government intervention. In short, the invisible hand would need a visible hand from the government. By then, the Bolsheviks had established the Soviet Union and Mao had founded the communist party in China. The idea that there had been a purely capitalist society since the time the Egyptian government built grain silos, or since Tiberius had rescued the Roman economy with bailouts, was a fallacy. The US and other governments began spending, and incurring debt to do so, and we began to dig the world out of a depression.

But it took another world war to get there. And that war did more than just end the Great Depression. World War II was one of the greatest rebalancings of power the world has known - arguably even greater than the fall of the Roman and Persian empires and the shifts between Chinese dynasties. In short, we implemented a global world order of sorts in order to keep another war like that from happening. Globalism works for some and doesn’t work well for others. It’s easy to look on the global institutions built in that time as problematic. And organizations like the UN and the World Bank should evolve so they do more to lift all people up, so not as many around the world feel left behind. 

The systems of governance changed world economics. The Bretton Woods Agreement would set the framework for global currency markets until 1971. Here, all currencies were valued in relation to the US dollar, which, after that crazy rebalancing, now sat on 75% of the world's gold. The gold was still backed at a rate of $35 per ounce. And the Keynesian International Monetary Fund would begin managing the balance of payments between nations. Today there are 190 countries in the IMF.

Just as implementing the gold standard set the framework that allowed the investments that sparked capitalists like JP Morgan, an indirect financial system backed by gold through the dollar allowed for the next wave of investment, innovation, and so productivity gains.

This influx of money and investment meant there was capital to put to work and so bankers and financiers working with money all day derived new and witty instruments with which to do so. After World War II, we got the rise of venture capital. These are a number of financial instruments that have evolved so qualified investors can effectively make bets on a product or idea. Derivatives of venture include incubators and accelerators. 

The best example of the early venture capital deals would be when Ken Olsen and Harlan Anderson raised $70,000 in 1957 to usher in the age of transistorized computing. DEC rose to become the second largest computing company - helping revolutionize knowledge work and introduce a new wave of productivity gains and innovation. They went public in 1968 and the investor made over 500 times the investment, receiving $38 million in stock. More importantly, he stayed a friend and confidant of Olsen and invested in over 150 other companies. 

The ensuing neoclassical synthesis of economics basically informs us that free markets are mostly good and efficient, but if left to just Smith’s invisible hand, from time to time they will threaten society as a whole. Rather than the dark ages, we can continue to evolve by keeping markets moving and so keeping large-scale revolts at bay. As Asimov effectively pointed out in Foundation, this preserves human knowledge. And it strengthens economies, as we can apply math, statistics, and the rising computers to help apply monetary rather than fiscal policy, as Friedman would say, to keep the economy in equilibrium. 

Periods of innovation like we saw in the computer industry in the post-war era always seem to leave behind the people the innovation displaces. When enough people are displaced, we return to tribalism, nationalism, thoughts of fragmentation, and moves back in the direction of dystopian futures. Acknowledging that people are left behind and finding remedies is better than revolt and retreating from progress - and showing love to your fellow human is just the right thing to do. Not doing so creates recessions, like the ups and downs of the market in the gaps between innovative periods.

The stock market went digital in 1966, allowing more and more trades to be processed every day. Instinet was founded in 1969, allowing brokers to make after-hours trades. NASDAQ went online in 1971, removing the trading floor that had been around since the 1600s. And as money poured in, ironically, gold reserves started to go down a little. Just as the Romans under Tiberius saw money leave the country as investment, US gold was moving to other central banks to help rebuild countries, mostly those allied with NATO. But countries continued to release bank notes to pay to rebuild, creating a period of high inflation.

As with other times when gold became scarce, interest rates became unpredictable, moving from 3 to 17 percent and back again until they began to steadily decline in 1980. 

Gold would be removed from the London market in 1968, and other countries began to cash out their US dollars for gold. Belgium, the Netherlands, then Britain cashed in their dollars for gold, and much as had happened under the reign of Tiberius, there wasn’t enough to sustain the financial empires created. This was the turning point for the end of the informal links back to the gold standard. By 1971 Nixon was forced to sever the relationship between the dollar and gold, and the US dollar, by then the global standard going back to the Bretton Woods Agreement, became what’s known as fiat money. The Bretton Woods Agreement was officially over and the new world order was morphing into something else. Something less easily explainable to common people. A system where the value of currency was based not on the link to gold but on the perception of a country, as stocks were about to move from an era of performance and productivity to something more speculative.

Throughout the 80s more and more orders were processed electronically, and by 1996 we were processing online orders. The 2000s saw algorithmic and high frequency trading. By 2001 we could trade in pennies, and the rise of machine learning created billionaire hedge fund managers. Although earlier versions were probably more just about speed. Like: if EPS is greater than expected EPS and guidance EPS is greater than EPS, then buy real fast, analyze the curve, and sell when it tops out. Good for them for making all the moneys, but while each company is required to be transparent about their financials, high frequency trading has gone from rewarding companies with high earnings to seeming more like a social science, where the rising and falling is based on confidence about an industry and the management team.
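
To make that toy rule a little more concrete, here is a minimal sketch in Python of the kind of earnings-surprise logic described above. The function names and thresholds are illustrative assumptions, not anything taken from a real trading system.

# A toy sketch of the earnings-surprise rule described above.
# Every name and number here is hypothetical; real systems are far more involved.

def should_buy(reported_eps, expected_eps, guidance_eps):
    # Buy only when earnings beat expectations and guidance beats the report.
    return reported_eps > expected_eps and guidance_eps > reported_eps

def should_sell(prices):
    # Sell once the price curve appears to have topped out,
    # i.e. the latest price dips below the one before it.
    return len(prices) >= 2 and prices[-1] < prices[-2]

print(should_buy(reported_eps=1.25, expected_eps=1.10, guidance_eps=1.40))  # True: beat and raise
print(should_sell([100.0, 104.0, 107.0, 106.5]))                            # True: curve topped out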

It became harder and harder to explain how financial markets work. Again, bankers work with money all day and come up with all sorts of financial instruments to invest in with their time. The quantity and types of these became harder to explain. Junk bonds, penny stocks, and, to an outsider, strange derivatives. And so moving to digital trading is only one of the ways the global economy no longer makes sense to many. 

Gold and other precious metals can’t be produced at a rate faster than humans are produced. And so they had to give way to other forms of money and currency, which diluted the relationship between people and a finite, easy to understand, market of goods. 

As we moved to a digital world there were thinkers who saw the future of currency as flowing electronically. Russian cyberneticist Anatoly Kitov theorized electronic payments back in the 50s, and then came ATMs, which the rise of digital devices finally allowed to manifest over the ensuing decades. Credit cards moved the credit market toward micro-transactions, creating industries where shopkeepers had once kept debts in their own, more distributed, ledgers. As the links between financial systems increased and innovators saw the rise of the Internet on the way, more and more devices got linked up.

This, combined with the libertarianism shown by many in the next wave of Internet pioneers, led people to think up ways to build a new digital currency. David Chaum thought up ecash in 1983, using encrypted keys, much as PGP did for messages, to establish a digital currency. In 1998, Nick Szabo came up with the idea for what he called bitgold, a digital currency based on cryptographic puzzles: solved puzzles would be sent to a public registry and assigned to the solver's public key. This was kinda’ like using a mark on a Lydian rock to make sure coins were gold. He didn’t implement the system, but he had the initial concept that it would work similarly to the gold standard - just without a central authority, like the World Bank. 
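
To give a sense of what "cryptographic puzzle" means here, below is a minimal hashcash-style proof-of-work sketch in Python, the general mechanism bitgold described and Bitcoin later built on. The difficulty setting and function names are illustrative assumptions, not details taken from either system.

# Minimal hashcash-style proof-of-work sketch, illustrating the puzzle idea
# behind bitgold and Bitcoin mining. Names and difficulty are illustrative only.
import hashlib

def solve_puzzle(challenge, difficulty=4):
    # Search for a nonce whose hash, combined with the challenge,
    # starts with `difficulty` zero hex digits. Expensive to find.
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge, nonce, difficulty=4):
    # Anyone can cheaply check the work - a bit like rubbing a coin
    # on a touchstone to confirm it really is gold.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_puzzle("block-0")
print(nonce, verify("block-0", nonce))  # hard to produce, trivial to verify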

This was all happening concurrently with the rise of ubiquitous computing, the move away from checks to debit and credit cards, and the continued mirage that clouded what was really happening in the global financial system. There was a rise in e-commerce, with various sites emerging to sell the products of a given industry online. Speculation increased, creating a bubble around Internet companies. That dot-com bubble burst in 2001 and markets briefly retreated from the tech sector.

Another bull market was born around the rise of Google, Netflix, and others. Productivity gains were up and a lot of money was being put to work in the market, creating another bubble. Markets are cyclical and need to be reined back in from time to time. That's not to minimize the potentially devastating impacts on real humans. The Global Financial Crisis of 2008 came along for a number of reasons, mostly tied, to oversimplify the matter, to the bursting of a housing bubble. The lack of liquidity at banks caused a crash, and the lack of regulation caused many to think through the nature of currency and money in an increasingly globalized and digital world. After all, if the governments of the world couldn't protect the citizenry of the world from seemingly unscrupulous markets, then why not have completely deregulated markets where the invisible hand does so?

Which brings us to the rise of cryptocurrencies.

Who is John Galt? Bitcoin was invented by Satoshi Nakamoto, who created the first blockchain database and brought the world into peer-to-peer currency in 2009 when Bitcoin 0.1 was released. Satoshi mined block 0 of bitcoin for 50 bitcoins. Over the next year Satoshi mined an estimated million or so bitcoins. Back then a bitcoin was worth less than a penny. As bitcoin grew and the number of bitcoins mined into the blockchain increased, the scarcity increased and the value skyrocketed, reaching over $15 billion as of this writing. Who is Satoshi Nakamoto? No one knows - the name is a pseudonym. Other cryptocurrencies have risen, such as Ethereum. And the market has largely been allowed to evolve on its own, with regulators and traditional financiers seeing it as a fad. Is it? Only time will tell.

There are an estimated 200,000 tonnes of gold in the world, worth about 93 trillion dollars if so much of it weren't stuck in necklaces and teeth buried in the ground. The US sits on the largest stockpile of it today, at 8,000 tonnes worth about a third of a trillion dollars, followed by Germany, Italy, and France. By contrast there are 18,000,000 bitcoins, with a value of about $270 billion, a little less than the US supply of gold. And the global stock market is valued at over $85 trillion.

The global financial markets are vast. They include the currencies of the world and the money markets that trade them. Commodity markets, real estate, the international bond and equity markets, and derivative markets, which include contracts, options, and credit swaps. This becomes difficult to conceptualize; as just one small example, over $190 billion is traded on the world's stock markets every day.

Seemingly, rather than running on gold reserves, markets are increasingly driven by how well they put debt to work. National debts are an example of that. The US National Debt currently stands at over $27 trillion. Much is held by our own people as bonds, although some countries hold some as security as well, including governments like Japan and China, who hold about the same amount of US debt if you include Hong Kong with China. But what does any of that mean? The US GDP sits at about $22.3 trillion. So we owe a little more than we make in a year. Much as many families with mortgages, credit cards, etc might owe about as much as they make. And roughly 10% of our taxes go to pay interest. Just as we pay interest on mortgages.
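
A quick back-of-the-envelope check of that comparison, using the episode's rough figures rather than current data:

    # Debt-to-GDP ratio using the figures quoted above (trillions of US dollars).
    national_debt = 27.0
    gdp = 22.3
    print(f"Debt is about {national_debt / gdp:.0%} of one year's GDP")  # roughly 121%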

Most of this is transparent. As an example, government debt is often held in the form of a treasury bond. The treasury.gov website lists who holds what bonds: https://ticdata.treasury.gov/Publish/mfh.txt. Nearly every market discussed here can be traced to a per-transaction basis, with many transactions being a matter of public record. And yet, there is a common misconception that the market is controlled by a small number of people. Like a cabal. But as with most perceived conspiracies, the global financial markets are much more complex. There are thousands of actors who think they are acting rationally who are simply speculating. And there are a few who are committing a crime by inorganically manipulating markets, which has been illegal since the Venetians passed their first laws on the matter. Most day traders will eventually lose all of their money. Most market manipulators will eventually go to jail. But there's a lot of grey in between. And that can't entirely be planned for.

At the beginning of this episode I mentioned it was a prelude to a deeper dive into digital piracy, venture capital, Bitcoin, PayPal, Square, and others. Piracy, because it potentially represents the greatest redistribution of wealth since the beginning of time. Baidu and Alibaba have made their way onto public exchanges. ANT group has the potential to be the largest IPO in history. Huawei is supposedly owned by employees. You can also buy stocks in Russian banking, oil, natural gas, and telecom. 

Does this mean that the split created when the ideas of Marx became a political movement that resulted in communist regimes is over? No. These have the potential of creating a bubble. One that will then need correcting, maybe even based on intellectual property damage claims. The seemingly capitalistic forays made by socialist or communist countries just go to show that there really isn't, and has never been, a purely capitalist, socialist, or communist market. Instead, they sit on a spectrum, separated by a few percentage points of tax here and there to pay for whatever services or goods each nation holds important enough to make universal, to whatever degree that tax can provide the service or good.

So next time you hear “you don’t want to be a socialist country, do you?” keep in mind that every empire in history has simply been somewhere in a range from a free market to a state-run market. The Egyptians provided silos, the Lydians coined gold, the Romans built roads and bailed out banks, nations adopted gold as currency, then built elaborate frameworks to gain market equilibrium. Along the way markets have been abused and then regulated and then deregulated. The rhetoric used today, though, is really a misdirection play handed down by people with ulterior motives. You know, like back in the Venetian times. I immediately think of dystopian futures when I feel I’m being manipulated. That’s what charlatans do. That’s not quite so necessary in a utopian outlook.


Landing the Eagle

     4/30/2020

In this episode we wrap up the greatest achievements and challenges in the computing era of the 1960s. We learn about Howard Tindall, and how MIT became a dominant force in Apollo. We also learn about core rope memory and how seamstresses helped program Apollo capsules. This is the last episode of Season 1; it's time to go offline and research how a new era of computing arrives: microcomputing.


The History of Computing Ep 10: Computers and the Space Race

     3/25/2020

We go knee-deep into the computing technology available in the late 1950s and what it was used for: missiles and satellites. We see the creation of the NASA RTCC in a muddy field and revisit what IBM is up to.


Magnetic: The History of Computing ep 8

     2/24/2020

Episode 8 covers the amazing achievements of women in early computing, including an excellent mathematician who makes sense of programming languages, and a bombshell actress who invented something that makes it possible for us to use computers wirelessly.


Magnetic: the History of Computing

     11/11/2019

In this first episode, I go over the beginning of computing: why did we start this thing in the first place?  We review the Abacus, the plague, and the loom, and see why those factored into the device you're reading this on. 


Keeping Things BASIC

     12/14/2020

BASIC is a strange language. During the early days of home computing it was everywhere you looked; pretty much every microcomputer in the 70s and early 80s ran BASIC. For a time it filled a niche almost perfectly: it was a usable language that anyone could learn. That didn't happen by accident. Today we are looking at the development of BASIC, how two mathematicians started a quest to expose more students to computers, and how their creation got away from them.


Hacker Folklore

     12/28/2020

Hacker hasn't always been used to describe dangerous computer experts with ill intent. More accurately, it should be used to describe those enamored with computers, programming, and trying to push machines to do interesting things. The values, ethics, morals, and practices around those people make up what's known as hacker culture. Today we are digging into the Jargon File, a compendium of all things hackish and hackable, to take a look at hacker culture through its folklore.
 
Huge thanks to some of my fellow podcasters for doing readings for me this episode. In order of appearance they are:
 
Randall Kindig of the FloppyDays Vintage Computing Podcast(floppydays.com)
Charles Edge from The History of Computing(thehistoryofcomputing.libsyn.com)
Sebastian Major of Our Fake History(ourfakehistory.com)
 
Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


Lars Brinkhoff Interview, Preserving ITS

     1/18/2021

Lars Brinkhoff has been spearheading the effort to keep the Incompatible Timesharing System alive. Today we sit down to talk about the overall ITS restoration project, software preservation, and how emulation can help save the past.

You can find the full restoration project at github: https://github.com/PDP-10/its

And follow Lars on twitter: @larsbrinkhoff


On Chariots of the Gods?

     2/6/2021

Humanity is searching for meaning. We binge tv shows. We get lost in fiction. We make up amazing stories about super heroes. We hunt for something deeper than what’s on the surface. We seek conspiracies or... aliens.

I finally got around to reading a book that had been on my list for a long time, recently. Not because I thought I would agree with its assertions - but because it came up from time to time in my research. 

Chariots of the Gods? is a book written in 1968 by the German author Erich Von Daniken. He goes through a few examples to, in his mind, prove that aliens not only had been to Earth but that they destroyed Sodom with fire and brimstone, which he said was a nuclear explosion. He also says the Ark of the Covenant was actually a really big walkie-talkie for calling space.

Ultimately, the thesis centers around the idea that humans could not possibly have made the technological leaps we did, and so technology must have been given to us by the gods. I find this to be a perfectly satisfactory science fiction plot. In fact, various alien conspiracy theories seemed to begin soon after Orson Welles' 1938 live adaptation of H.G. Wells' War of the Worlds and, like a virus, they mutated. But did this alien virus start in a bat in Wuhan or in Roman Syria?

The ancient Greeks and then Romans had a lot of gods. Lucian of Samosata, writing in modern-day Syria in the second century AD, when people believed in multiple pantheons of gods, thought they should have a couple more. He wove together a story, which he called “A True Story.” In it, he admits it’s all make-believe. In the satire, Lucian and crew get taken to the Moon, where they get involved in a war between the Moon and the Sun kings over the rights to colonize the Morning Star. They then get eaten by a whale, escape, and travel on, meeting great Greeks through time, including Pythagoras, Homer, and Odysseus. And they find the new world. Think of how many modern plots are wrapped up in that second-century book, written largely to make fun of storytellers like Homer.

The 1800s was one of the first centuries in which humanity saw a rapid merger and explosion of scientific understanding, and Edgar Allan Poe again took us to the moon in "The Unparalleled Adventure of One Hans Pfaall" in 1835. Jules Verne, Mary Shelley, and then H.G. Wells with that War of the Worlds in 1898. By then we’d mapped the surface of the moon with telescopes, so they wrote of Mars and further. H.P. Lovecraft gave us The Call of Cthulhu. These authors predicted the future - but science fiction became a genre that did more. It helped us create satire or allegory, or just comparisons to these rapid global changes, in ways that called out the social impact to consider before or after we invent. And to just cope with evolving social norms. The magazine Amazing Stories came in 1926, and the greatest work of science fiction premiered in 1942 with Isaac Asimov’s Foundation. Science fiction was opening our eyes to what was possible and opened the minds of scientists to study what we might create in the future. But it wasn’t real.

Von Daniken and the French author Robert Charroux seemed to influence one another in taking history and science and turning them into pseudohistory and pseudoscience. And both got many of their initial ideas from the 1960 book The Morning of the Magicians. But Chariots of the Gods? was a massive success and a best seller. And rather than being dismissed, it has now spread to include conspiracy and other theories. Which is fine as fiction, but not as non-fiction.

Let’s look at some other specific examples from Chariots of the Gods? Von Daniken claims that Japanese Dogu figures were carvings of aliens. He claims there were alien helicopter carvings in an Egyptian temple. He claims the Nazca lines in Peru were a way to call aliens, and that a map from 1513 actually showed the earth from space, rather than considering it possible that cartography in the Age of Discovery was capable of a somewhat accurate representation of the world. He claimed stories in the Bible were often inspired by alien visits, much as some First Nations peoples and cargo cults thought people in ships visiting their lands for the first time might be gods.

The one thing I’ve learned researching these episodes is that technology has been a constant evolution. Many of our initial discoveries like fire, agriculture, and using the six simple machines could be observed in nature. From the time we learned to make fire, it was only a matter of time before humanity discovered that stones placed in or around fire might melt in certain ways - and so metallurgy was born. We went through population booms as we discovered each of these.

We used the myths and legends that became religions to hand down knowledge, as I was taught to use mnemonics to memorize the seven layers of the OSI model. That helped us preserve knowledge of astronomy across generations so we could explore further and better maintain our crops. 

The ancient Sumerians and then Babylonians gave us writing. But we had been drawing on caves for thousands of years. Which seems more likely: that we were gifted this advance, or that as we began to settle in denser urban centers we, out of a need to scale operations, tracked the number of widgets we had with markings that over time evolved into a written language? First through pictures and then through words that evolved into sentences and then epics. We could pass down information more reliably across generations.

Trade and commerce, and then ziggurats and pyramids, helped hone our understanding of mathematics. The study of logic and automata allowed us to build bigger and faster and process more raw materials. Knowledge of all of these discoveries spread across trade routes.

So ask yourself this. Which is more likely, the idea that humans maintained a constant, ever-evolving stream of learned ingenuity that was passed down for tens of thousands of years until it accelerated when we learned to write, or do you think aliens from outer space instead gave us technology? 

I find that it revokes our very agency to assert anything but the idea that humans are capable of the fantastic feats we have achieved, and I believe it's insulting to take away from the great philosophers, discoverers, scientists, and thinkers that got us where we are today.

Our species has long made up stories to explain that which the science of the day cannot. Before we understand the why, we make up stories about the how. This allowed us to pass knowledge down between generations. We see this in ancient explanations of the movements of stars before we had astrolabes. We see humans wanting to leave something behind that helps the next generations, or burial sites like Stonehenge - not to summon Thor from an alien planet, as Marvel has rewritten their own epics to indicate, in part based on rethinking these mythos in the context of Chariots of the Gods?

Ultimately, the greater our gaps in understanding, the more disconnected from ourselves I find that most people are. We listen to talking heads rather than think for ourselves. We get lost in theories of cabals. We seek a deeper, missing knowledge because we can’t understand everything in front of us.

Today, if we know where to look, and can decipher the scientific jargon, all the known knowledge of science and history is at our fingertips. But it can take a lifetime to master one of thousands of fields of scientific research. If we don’t have that specialty then we can perceive it as unreachable and think maybe this pseudohistorical account of humanity is true and maybe aliens really did give us our technology.

If we feel left behind then it becomes easier to blame others when we can’t get below the surface of complicated concepts. Getting left behind might mean that jobs don’t pay what they paid our parents. We may perceive others as getting attention or resources we feel we deserve. We may feel isolated and alone. And all of those are valid feelings. When they’re heard then maybe we can look to the future instead of accepting pseudoscience and pseudohistory and conspiracies. Because while they make for fun romps on the big screen, they’re dangerous when taken as fact.


From Moveable Type To The Keyboard

     2/18/2021

QWERTY. It’s a funny word. Or not a word. But also not an acronym per se. Those are the top six letters on a modern keyboard. Why? Because the frequency with which those letters are used allows the hammers on a traditional typewriter to travel to and fro smoothly, which lets us be more efficient with our time while typing. The concept of the keyboard goes back almost as far as moveable type - but it took hundreds of years to standardize where we are today.

Johannes Gutenberg is credited with developing the printing press in the 1450s. Printing using wooden blocks had been brought to the Western world from China, which led him to replace the wood or clay characters with metal, thus giving us what we now think of as moveable type. This meant we were now arranging blocks of characters to print words onto paper. From there it was only a matter of time before we would realize that pressing a key could stamp a character onto paper as we went, rather than composing a full page and then pressing ink to paper.

The first to get credit for pressing letters onto paper using a machine was Venetian Francesco Rampazzetto in 1575. But as with many innovations, this one needed to bounce around in the heads of inventors until the appropriate level of miniaturization and precision was ready. Henry Mill filed an English patent in 1714 for a machine that could type (or impress) letters progressively. By then, printed books were ubiquitous but we weren’t generating pages of printed text on the fly just yet. 

Others would develop similar devices but from 1801 to 1810, Pellegrino Turri in Italy developed carbon paper. Here, he coated one side of paper with carbon and the other side with wax. Why did he invent that, other than to give us an excuse to say carbon copy later (and thus the cc in an email)? 

Either he or Agostino Fantoni da Fivizzano invented a mechanical machine for pressing characters to paper for Countess Carolina Fantoni da Fivizzano, a blind friend of his. She would go on to send him letters written on the device, some of which exist to this day. More inventors tinkered with the idea of mechanical writing devices, often working in isolation from one another.

One was a surveyor, William Austin Burt. He found the handwritten documents of his field laborious and so gave us the typographer in 1829. Each letter had to be manually moved into position to print, so it wasn’t all that much faster than handwriting, but the name would be hyphenated later to form type-writer. And with precision increasing and a lot of invention going on at the time, there were other devices. But his patent was signed by Andrew Jackson.

John Pratt introduced his Pterotype in an article in Scientific American in 1867. It was a device that more closely resembles the keyboard layout we know today, with 4 rows of keys and a split in the middle for hands. Others saw the article and continued their own innovative additions.

Frank Hall had worked on the telegraph before the Civil War and used his knowledge there to develop a Braille writer, which functioned similarly to a keyboard. He would move to Wisconsin, where he came in contact with another team developing a keyboard.

Christopher Latham Sholes saw the article in Scientific American and, along with Carlos Glidden and Samuel Soule out of Milwaukee, developed from 1867 to 1868 the QWERTY keyboard we know as the standard layout today. Around the same time, Danish pastor Rasmus Malling-Hansen introduced the writing ball in 1870. It could also type letters onto paper, but with a much more complicated keyboard layout. It was actually the first typewriter to go into mass production - but at this point new inventions were starting to follow the QWERTY layout. Because asdfjkl;. Both, though, were looking to increase typing speed, with Malling-Hansen’s layout putting consonants on the right side and vowels on the left - while Sholes and Glidden mixed keys up to help reduce the strain on the hardware as it recoiled, splitting characters commonly paired in words between the two sides.

James Densmore encountered the Sholes work and jumped in to help. They had it relentlessly tested and iterated on the design, getting more and more productivity gains and making the device more and more hardy. When the others left the project, it was Densmore and Sholes carrying on. But Sholes was also a politician and editor of a newspaper, so had a lot going on. He sold his share of the patent for their layout for $12,000 and Densmore decided to go with royalties instead. 

By the 1880s, the invention had been floating around long enough and given a standardized keyboard it was finally ready to be mass produced. This began with the Sholes & Glidden Type Writer introduced in America in 1874. That was followed by the Caligraph. But it was Remington that would take the Sholes patent and create the Remington Typewriter, removing the hyphen from the word typewriter and going mainstream - netting Densmore a million and a half bucks in 1800s money for his royalties. And if you’ve seen anything typed on it, you’ll note that it supported one font: the monospaced sans serif Grotesque style.

Characters had always been upper case. Remington added a shift key to give us the ability to do both upper and lower case in 1878 with the Remington Model 2. This was also where we got the ampersand, parentheses, percent symbol, and question mark as shift characters for numbers. Remington also added tab and margins in 1897. Mark Twain was the first author to turn in a manuscript from a typewriter, using what else but the Remington Typewriter. By then, we were experimenting with the sizes and spaces between characters, or kerning, to make typed content easier to read. Some companies moved to slab serif or Pica fonts and typefaces. You could really tell a lot about a company by that Olivetti with its modern, almost anti-Latin fonts.

The Remington Typewriter Company would later merge with the Rand Kardex company to form Remington Rand, making typewriters, guns, and then in 1950, acquiring the Eckert-Mauchly Computer Corporation, who made ENIAC - arguably the first all-digital computer. Rand also acquired Engineering Research Associates (or ERA) and introduced the Univac. Electronics maker Sperry acquired them in 1955, and then merged with Burroughs to form Unisys in 1988, still a thriving company. But what’s important is that they knew typewriters. And keyboards.

But electronics had been improving in the same era that Remington took their typewriters mainstream, and before. Samuel Morse developed the recording telegraph in 1835 and David Hughes added the printing telegraph. Emile Baudot gave us a 5-bit code in the 1870s that enhanced that, but those machines were still using keys similar to what you find on a piano. The typewriter hadn’t merged with digital communications just yet. Thomas Edison patented an electric typewriter in 1872 but didn’t produce a working model. And this was a great time of innovation. For example, Alexander Graham Bell was hard at work on patenting the telephone at the time.

James Smathers then gave us the first electric typewriter in 1920, and by the 1930s the improved Baudot code - the source of the term baud - was combined with a QWERTY keyboard by Siemens and others to give us typing over the wire. The Teletype Corporation was founded in 1906 and would go from tape punches and readers to producing the teletypes that allowed users to dial into mainframes on the timesharing networks of the 1970s. But we’re getting ahead of ourselves. How did we eventually end up plugging a keyboard into a computer?

Herman Hollerith, the mind behind the original IBM punch cards for tabulating machines before his company got merged with others to form IBM, brought us text keypunches, which were later used to input data into early computers. The Binac computer used a similar representation, with 8 keys and an electromechanical control added to input data into the computer much as a punch card might - for this, think of a modern 10-key pad. Given that we had electric typewriters for a couple of decades, it was only a matter of time before a full keyboard's worth of text was needed on a computer. That came in 1954 with pioneering work done at MIT. Here, Douglas Ross wanted to hook up a Flexowriter electric typewriter to a computer, which would be done the next year as yet another of the huge innovations coming out of the Whirlwind project at MIT. With the addition of core memory to computing, that was the first time a real keyboard - and being able to write characters into a computer - was really useful. Nearly 400 years after the first attempts to build a moveable type machine, and just shy of 100 years after the layout had been codified, the computer keyboard was born.

The PLATO team at the University of Illinois Champaign-Urbana in the late 60s was one of many research teams that sought to develop cheaper input/output mechanisms for their ILLIAC computer. Prior to moving to standard keyboards, they built custom devices with fewer keys to help students select multiple-choice answers. But eventually they used teletype-esque systems.

Those early keyboards were mechanical. They still made a heavy, clanky sound when the keys were pressed. Not as much as a big mechanical typewriter, but not like the keyboards we use today. These used keys with springs inside them. Springs would be replaced with pressure pads in some machines, including the Sinclair ZX80 and ZX81. And the Timex Sinclair 1000. Given that there were fewer moving parts, they were cheap to make. They used conductive traces with a gate between two membranes. When a key was pressed, electricity flowed through what amounted to a flip-flop. When the key was released, the electricity stopped flowing. I never liked them because they just didn’t have that feel. In fact, they’re still used in devices like microwaves to provide buttons under LED lights that you can press.

By the late 1970s, keyboards were becoming more and more common. The next advancement was the Chiclet keyboard, common on the TRS-80 and the IBM PCjr. These were like membrane keyboards but used moulded rubber. Scissor switch keyboards became the standard for laptops - these involve a couple of pieces of plastic under each key, arranged like a scissor. And more and more keyboards were produced.

With an explosion in the amount of time we spent on computers, we eventually got about as many designs of ergonomic keyboards as you can think of. Here, doctors or engineers or just random people would attempt to raise or lower hands or move hands apart or depress various keys or raise them. But as we moved from desktops to laptops or typing directly on screens as we do with tablets and phones, those sell less and less.

I wonder what Sholes would say if you showed him and the inventors he worked with what the QWERTY keyboard looks like on an iPhone today? I wonder how many people know that at least two of the steps in the story of the keyboard had to do with helping the blind communicate through the written word? I wonder how many know about the work Alexander Graham Bell did with the deaf and the impact that had on his understanding of the vibrations of sound, the emergence of the phonautograph to record sound, and how that would become acoustic telegraphy and then the telephone, which could later stream baud? Well, we’re out of time for today, so that story will have to get tabled for a future episode.

In the meantime, look around for places where there’s no standard. Just like the keyboard layout took different inventors and iterations to find the right amount of productivity, any place where there’s not yet a standard just needs that same level of deep thinking, and sometimes generations, to get it perfected. But we still use the QWERTY layout today, and so sometimes once we find the right mix, we’ve set in motion an innovation that can become a true game changer. And if it’s not ready, at least we’ve contributed to the evolutions that revolutionize the world. Even if we don’t use those inventions. Bell famously never had a phone installed in his office. Because distractions. Luckily I disabled notifications on my phone before recording this or it would never get out…


8086: The Unexpected Future

     2/22/2021

The Intel 8086 may be the most important processor ever made. Its descendants are central to modern computing, while retaining an absurd level of backwards compatibility. For such an important chip it had an unexpected beginning. The 8086 was meant as a stopgap measure while Intel worked on bigger and better projects. This episode we are looking at how Intel was trying to modernize, how the 8086 fit into that larger plan, and its pre-IBM life.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


PayPal Was Just The Beginning

     3/6/2021

We can look around at distributed banking, crypto-currencies, Special Purpose Acquisition Companies, and so many other innovative business strategies as new and exciting and innovative. And they are. But paving the way for them was simplifying online payments to what I’ve heard Elon Musk call just some rows in a database. 

Peter Thiel, Max Levchin, and former Netscaper Luke Nosek had this idea in 1998. Levchin and Nosek had worked together on a startup called SponsorNet New Media while at the University of Illinois Champaign-Urbana, where PLATO and Mosaic had come out of. SponsorNet was supposed to sell online banner ads but would instead be one of four failed startups before they zeroed in on this new thing, where they would enable digital payments for businesses and make it simple for consumers to buy things online. They called the company Confinity and set up shop in beautiful Mountain View, California.

It was an era when a number of organizations taking payments online were doing things that weren’t so great. Companies would cache credit card numbers on sites, many had weak security, and the rush to sell everything in the bubble forming around dot-coms fueled a preference for speed over security, privacy, or even reliability.

Confinity would store the private information in its own banking vaults, keep it secure, and provide access to vendors - taking a small charge per transaction. Where only large companies had been able to build systems to take online payments, now small businesses and emerging online stores could compete with the big boys. Thiel and Levchin had hit on something when they launched a service called PayPal to provide a digital wallet and enable online transactions. They even accepted venture funding, taking $3 million from backers like Deutsche Bank, famously beamed over Palm Pilots. One of those funders was Nokia, investing in PayPal expanding into digital services for the growing mobile commerce market. And by 2000 they were up to 1,000,000 users.

They saw an opening to make a purchase from a browser or app on a cell phone, using one of those new smart phone ideas. And they were rewarded with over 10 million people using the site in just three short years, processing a whopping $3 billion in transactions.

Now this was the heart of the dot-com bubble. In that time, Elon Musk managed to sell his early startup Zip2, which made city guides on the early internet, to Compaq for around $300 million, pocketing $22 million for himself. He parlayed that payday into X.com, another online payment company. X.com exploded to over 200,000 customers quickly and as happens frequently with rapid acceleration, a young Musk found himself with a new boss - Bill Harris, the former CEO of Intuit. 

And they helped invent many of the ways we did business online at that time. One of my favorites of Levchin’s contributions to computing, the Gausebeck-Levchin test, is one of the earliest implementations of what we now call a CAPTCHA - you know, when you’re shown a series of letters and asked to type them in to eliminate bots.
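
As a rough illustration of the idea (not the actual Gausebeck-Levchin implementation, which rendered distorted images), here is a minimal challenge-and-verify sketch in Python; the helper names and alphabet are assumptions for the demo.

    import secrets
    import string

    def new_challenge(length: int = 6) -> str:
        # Generate the random characters a human would be asked to read back.
        alphabet = string.ascii_uppercase + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def verify(expected: str, response: str) -> bool:
        # Case-insensitive, constant-time comparison of the user's answer.
        return secrets.compare_digest(expected.upper(), response.strip().upper())

    challenge = new_challenge()  # in practice this would be drawn as a distorted image
    print("Type these characters:", challenge)
    # verify(challenge, user_input) would then gate the signup or payment attempt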

Harris helped the investors de-risk by merging Confinity with X.com. Peter Thiel and Elon Musk are larger-than-life minds in Silicon Valley, but the two were substantially different. Musk took on the CEO role, and Musk and Thiel butted heads. Thiel believed in a Linux ecosystem and Musk believed in a Windows ecosystem. Thiel wanted to focus on money transfers, similar to the PayPal of today. Given that those were just rows in a database, it was natural that that kind of business would become a red ocean, and indeed today there are dozens of organizations focused on it. But PayPal remains the largest. Musk instead wanted to become a full online banking system - much more ambitious. Ultimately Thiel won and assumed the title of CEO.

They remained a money transmitter and not a full bank. This means they keep funds that have been sent but not yet picked up in an interest-bearing account at a bank.

They renamed the company to PayPal in 2001 and focused on taking the company public, with an IPO as PYPL in 2002. The stock shot up 50% in the first day of trading, closing at $20 per share. Yet another example of the survivors of the dot-com bubble increasing the magnitude of valuations. By then, most eBay transactions accepted PayPal and, seeing an opportunity, eBay acquired PayPal for $1.5 billion later in 2002. Suddenly PayPal was the default option for closed auctions and would continue their meteoric rise. Musk is widely reported to have made almost $200 million when eBay bought PayPal and Thiel is reported to have made over $50 million.

Under eBay, PayPal would grow and, as with most companies that IPO, see a red ocean form in their space. But they brought in people like Ken Howery, who served as the VP of corporate development, would later cofound the investment firm Founders Fund with Thiel, and then become the US Ambassador to Sweden under Trump. And he’s just the first of what’s called the PayPal Mafia, a couple dozen extremely influential personalities in tech.

By 2003, PayPal had become the largest payment processor for gambling websites. Yet they walked away from that business to avoid the complicated regulations, at least until various countries could verify a license for online gambling venues.

In 2006 they added security keys and moved to sending codes to phones as a second factor of security validation. In 2008 they bought Fraud Sciences, to gain access to better online risk management tools, and Bill Me Later.

As the company grew, they set up a company in the UK and began doing business internationally. They moved their EU presence to Luxembourg in 2007. They’ve often found themselves embroiled in politics, blocking political financing accounts, Alex Jones’ show InfoWars, and, one of the more challenging cases for them, WikiLeaks in 2010. That led to them being attacked by members of Anonymous with a series of denial of service attacks that brought the PayPal site down.

OK, so that early CAPTCHA was just one way PayPal was keeping us secure. It turns out that moving money is complicated, even the $3 you paid for that special Golden Girls t-shirt you bought for a steal on eBay. For example, US States require reporting certain transactions, some countries require actual government approval to move money internationally, some require a data center in the country, like Turkey. So on a case-by-case basis PayPal has had to decide if it’s worth it to increase the complexity of the code and spend precious development cycles to support a given country. In some cases, they can step in and, for example, connect the Baidu wallet to PayPal merchants in support of connecting China to PayPal. 

They were spun back out of eBay in 2014 and acquired Xoom for $1 billion in 2015, then iZettle, who also does point-of-sale systems, for $2.2 billion. And surprisingly they bought online coupon aggregator Honey for $4B in 2019. But their best acquisition to many would be tiny app payment processor Venmo for $26 million. I say this because a friend claimed they prefer that to PayPal because they like the “little guy.”

Out of nowhere, just a little more than 20 years ago, the founders of PayPal and a number of their initial employees willed a now Fortune 500 company into existence. While they were growing, they had to learn about and understand so many capital markets and regulations. This sometimes showed them how they could better invest money. And many of those early employees went on to have substantial impacts in technology. That brain drain helped fuel the Web 2.0 companies that rose afterwards.

One of the most substantial ways was through investment activities. Thiel would go on to put $10 million of his money into Clarium Capital Management, a hedge fund, and into Palantir, a big data AI company with a focus on the intelligence industry, which now has a $45 billion market cap. And he funded another organization who doesn’t at all use our big private data for anything, called Facebook. He put half a million into Facebook as an angel investor - an investment that has paid back billions. He also launched Founders Fund and Valar Ventures, and is a partner at Y Combinator, in capacities where he’s funded everyone from LinkedIn and Airbnb to Stripe to Yelp to Spotify to SpaceX to Asana, and the list goes on and on and on.

Musk has helped take so many industries online. Why not just apply that startup modality to space, so he launched SpaceX; to cars, so he helped launch (and backed financially) Tesla; to solar power, so he launched SolarCity; and to building tunnels, so he launched The Boring Company. He dabbles in Hyperloops (thus the need for tunnels) and OpenAI and, well, whatever he wants. He’s even done cameos in movies like Iron Man. He’s certainly a personality.

Max Levchin would remain the CTO and then co-found and become the CEO of Affirm, a public fintech company. 

David Sacks was the COO at PayPal and founded Yammer. Roelof Botha is the former CFO at PayPal who became a partner at Sequoia Capital, one of the top venture capital firms. Yishan Wong was an engineering manager at PayPal who became the CEO of Reddit.

Steve Chen left to join Facebook but hooked back up with Jawed Karim, who he had studied computer science with at the University of Illinois at Champaign-Urbana, for a new project. They were joined by Chad Hurley, who had created the original PayPal logo, to found YouTube. They sold it to Google for $1.65 billion in 2006. Hurley now owns part of the Golden State Warriors, the MLS Los Angeles team, and Leeds United.

Reid Hoffman was another COO at PayPal, who Thiel termed the “firefighter-in-chief,” and left to found LinkedIn. After selling LinkedIn to Microsoft for over $26 billion, he became a partner at the venture capital firm Greylock Partners.

Jeremy Stoppelman and Russel Simmons co-founded Yelp with $1 million in funding from Max Levchin, taking the company public in 2011. And the list goes on.

PayPal paved the way for small transactions on the Internet. A playbook repeated in different parts of the sector by the likes of Square, Stripe, Dwolla, Due, and many others - including Apple Pay, Amazon Payments, and Google Wallet. We live in an era now, where practically every industry has been taken online. Heck, even cars. In the next episode we’ll look at just that, exploring the next steps in Elon Musk’s career after leaving PayPal. 


The IBM PC

     3/8/2021

Released in August 1981, the IBM PC is perhaps one of the most important computers in history. It originated the basic architecture computers still use today, it flung the doors open to a thriving clone market, and created an ad-hoc set of standards. The heart of the operation, Intel's 8088, solidified the x86 architecture as the computing platform of the future. IBM accomplished this runaway success by breaking all their own rules, heavily leveraging 3rd party hardware and software, and by cutting as many corners as possible. The PC was designed in less than a year, so how did it become the most enduring design in the industry?
 
Some ad clips this episode were from this fabulous PC ad compilation: https://www.youtube.com/watch?v=kQT_YCBb9ao
 
Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


THE SOURCE

     3/21/2021

One of the great things about the modern Internet is the wide range of services and content available on it. You have news, email, games, even podcasts. And in each category you have a wide range of choices. This wide diversity makes the Internet so compelling and fun to explore. But what happens when you take away that freedom of choice? What would a network look like if there was only one news site, or one place to get email? Look no further than THE SOURCE. Formed in 1979 and marketed as the information utility for the information age, THE SOURCE looked remarkably like the Internet in a more closed-off format. The key word here is: looked.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


C Level, Part I

     4/4/2021

C is easily one of the most influential programming languages in the world, and it's also one of the most popular languages in the world. Even after close to 50 years it remains in widespread and sustained use. In this series we are going to look at how C was developed, how it spread, and why it remains so relevant. To do that we need to start with background, and look at what exactly influenced C. This episode we are diving into some more ALGOL, CPL, BCPL, and eventually B.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing


An Abridged History Of Instagram

     4/24/2021

This was a hard episode to do. Because telling the story of Instagram is different than explaining the meaning behind it. You see, on the face of it - Instagram is an app to share photos. But underneath that it’s much more. It’s a window into the soul of the Internet-powered culture of the world. Middle schoolers have always been stressed about what their friends think. It’s amplified on Instagram. People have always been obsessed with and copied celebrities - going back to the ages of kings. That too is on Instagram. We love dogs and cute little weird animals. So does Instagram. 

Before Instagram, we had photo sharing apps. Like Hipstamatic. Before Instagram, we had social networks - like Twitter and Facebook. How could Instagram do something different and yet, so similar? How could it offer that window into the world when the lens photos are snapped with are as though through rose colored glasses? Do they show us reality or what we want reality to be? Could it be that the food we throw away or the clothes we donate tell us more about us as humans than what we eat or keep? Is the illusion worth billions of dollars a year in advertising revenue while the reality represents our repressed shame?

Think about that as we go through this story.

If you build it, they will come. Everyone who builds an app just kinda’ automatically assumes that throngs of people will flock to the App Store, download the app, and they will be loved and adored and maybe even become rich. OK, not everyone thinks such things - and with the number of apps on the stores these days, the chances are probably getting closer to those of a high school quarterback playing in the NFL. But in today’s story, that is exactly what happened.

And Kevin Systrom had already seen it happen. He was offered a job as one of the first employees at Facebook while still going to Stanford. That’ll never be a thing. Then while on an internship he was asked to be one of the first Twitter employees. That’ll never be a thing either. But they were things, obviously!

So in 2010, Systrom started working on an app he called Burbn, and within two years sold the company, by then called Instagram, for one billion dollars. In doing so he and his co-founder Mike Krieger helped forever change the deal landscape for mergers and acquisitions of apps and, more profoundly, gave humanity lenses with which to see a world we want to see - if not reality.

Systrom didn’t have a degree in computer science. In fact, he taught himself to code after working hours, then during working hours, and by osmosis through working with some well-known founders. 

Burbn was an app to check in and post plans and photos. It was written in HTML5 and, in a Cinderella story, he was able to raise half a million dollars in funding from Baseline Ventures and Andreessen Horowitz, bringing in Mike Krieger as a co-founder.

At the time, Hipstamatic was the top photo manipulation and filtering app. Given that the iPhone came with a camera on par with (if not better than) most digital point-and-shoots at the time, the pair re-evaluated the concept and instead leaned further into photo sharing, while still maintaining the location tagging.

The original idea was to swipe right and left, as we do in apps like Tinder. But instead they chose to show photos in chronological order and used a now iconic 1:1 aspect ratio - the photos were square - so there was room on the screen to show metadata and a taste of the next photo, to keep us streaming. The camera was simple, like the Holga camera Systrom had been given while studying abroad when at Stanford. That camera made pictures a little blurry, and in an almost filtered way made them look almost artistic.

After Systrom graduated from Stanford in 2006, he worked at Google, then NextStop, and then got the bug to make his own app. And boy did he. One thing though: even his wife Nicole didn’t think she could take good photos, having seen those from a friend of Systrom’s. He said the photos were so good because of the filters. And so we got the first filter, X-Pro 2, so she could take great photos on the iPhone 3G.

Krieger shared the first post on Instagram on July 16, 2010 and Systrom followed up within a few hours with a picture of a dog. The first of probably a billion dog photos (including a few of my own). And they officially published Instagram on the App Store in October of 2010.

After adding more and more filters, Systrom and Krieger closed in on one of the greatest growth hacks of any app: they integrated with Facebook, Twitter, and Foursquare so you could take the photo in Instagram and shoot it out to one of those apps - or all three.

At the time Facebook was more of a browser tool. Few people used the mobile app. And for those that did try and post photos on Facebook, doing so was laborious, using a mobile camera roll in the app and taking more steps than needed. Instagram became the perfect glue to stitch other apps together. And rather than always needing to come up with something witty to say like on Twitter, we could just point the camera on our phone at something and hit a button. 

The posts had links back to the photo on Instagram. They hit 100,000 users in the first week and a million users by the end of the year. Their next growth hack was to borrow the hashtag concept from Twitter and other apps, which they added in January of 2011.

Remember how Systrom interned at Odeo and turned down the offer to go straight to Twitter after college? Twitter didn’t have photo sharing at the time, but Twitter co-founder Jack Dorsey had shown Systrom plenty of programming techniques and the two stayed in touch. He became an angel investor in a $7 million Series A and the first real influencer on the platform, sending that link to every photo to all of his Twitter followers every time he posted. The growth continued. In June 2011 they hit 5 million users, and doubled to 10 million by September of 2011. I was one of those users, posting the first photo to @krypted in the fall - being a nerd, it was of the iOS 5.0.1 update screen, and according to the lone comment on the photo my buddy @acidprime apparently took the same photo.

They spent the next few months just trying to keep the servers up and running and released an Android version of the app in April of 2012, just a couple of days before taking on $50 million in venture capital. But that didn’t need to last long - they sold the company to Facebook for a billion dollars a few days later, effectively doubling each investor in that last round of funding and shooting up to 50 million users by the end of the month.

At 13 employees, that’s nearly $77 million per employee. Granted, much of that went to Systrom and the investors. The Facebook acquisition seemed great at first. Instagram got access to bigger resources than even a few more rounds of funding would have provided.

Facebook helped them scale up to 100 million users within a year and following Facebook TV, and the brief but impactful release of Vine at Twitter, Instagram added video sharing, photo tagging, and the ability to add links in 2013.  Looking at a history of their feature releases, they’re slow and steady and probably the most user-centered releases I’ve seen. And in 2013, they grew to 150 million users, proving the types of rewards that come from doing so. 

With that kind of growth it might seem that it can’t last forever - and yet on the back of new editing tools, a growing team, and advertising tools, they managed to hit a staggering 300 million users in 2014.

While they had released thoughtful, direct, human-sold advertising before, they opened up the ability to buy ads to all advertisers, piggybacking on the Facebook ad selling platform in 2015. That’s the same year they introduced Boomerang, which looped photos in forward and reverse. It was cute for a hot minute.

2016 saw the introduction of analytics that included demographics, impressions, likes, reach, and other tools for businesses to track performance not only of ads, but of posts. As with many tools, it was built for the famous influencers that had the ear of the founders and management team - and made available to anyone. They also introduced Instagram Stories, which was a huge development effort, and they owned that they copied it from Snapchat - a surprising and truly authentic move for a Silicon Valley startup. And we could barely call them a startup any longer, shooting past half a billion users by the middle of the year and 600 million by the end of the year.

That year, they also brought us live video, a Windows client, and one of my favorite aspects with a lot of people posting in different languages, they could automatically translate posts. 

But something else happened in 2016. Donald Trump was elected to the White House. This is not a podcast about politics, but it’s safe to say that it was one of the most divisive elections in recent US history. And one of the first where social media is reported to have potentially changed the outcome. Disinformation campaigns from foreign actors, data illegally obtained via Cambridge Analytica on the Facebook network, increasingly insular personal networks, and machine learning-driven doubling down on only showing us things that appealed to our world view all led to many being able to point at networks like Facebook and Twitter as having been party to whatever they thought the “other side” in an election had done wrong.

Yet Instagram was just a photo sharing site. They put the users at the center of their decisions. They promoted the good things in life. While Zuckerberg claimed that Facebook couldn’t have helped change any outcomes and that Facebook was just an innocent platform that amplified human thoughts, Systrom openly backed Hillary Clinton. And yet, even with disinformation spreading on Instagram, they seemed immune from accusations and from having to go to Capitol Hill to be grilled following the election. Being good to users apparently has its benefits.

However, some regulation needed to happen. In 2017, the Federal Trade Commission stepped in to force influencers to be transparent about their relationships with advertisers - Instagram responded by giving us the ability to mark a post as sponsored. Still, Instagram revenue spiked to over 3 and a half billion dollars in 2017.

Instagram revenue grew past 6 billion dollars in 2018. Systrom and Krieger stepped away from Instagram that year. It was now on autopilot.  Although I think all chief executives have a 

Instagram revenue shot over 9 billion dollars in 2019. In those years they released IGTV and tried to get more resources from Facebook, contributing far more to the bottom line than they took. 

2020 saw Instagram ad revenue close in on 13.86 billion dollars with projected 2021 revenues growing past 18 billion.

In The Picture of Dorian Gray from 1890, Lord Henry describes the impact of influence as destroying our genuine and true identity, taking away our authentic motivations, and, as Shakespeare would have put it, making us servile to the influencer. Some are famous and so become influencers on the product naturally, like musicians, politicians, athletes, and even the Pope. Others become famous due to getting showcased by the @instagram feed or some other prominent person. These influencers often stage a beautiful life and, to be honest, sometimes we just need that as a little mind candy. But other times it can become too much, forcing us to constantly compare our skin to doctored skin, our lifestyle to those who staged their own, and our number of friends to those who might just have bought theirs. And seeing this obvious manipulation gives some of us even more independence than we might have felt before. We have a choice: to be or not to be.

The Instagram story is one with depth. Those influencers are one of the more visible aspects, going back to the first sponsored photos, posted by Snoop Dogg. And when Mark Zuckerberg decided to buy the company for a billion dollars, many thought he was crazy. But once they turned on the ad revenue machine, which he insisted Systrom wait on until the company had enough users, it was easy to go from 3 to 6 to 9 to over 13 and now likely over 18 billion dollars a year. That’s a greater than 30:1 return on investment, helping to prove that such lofty acquisitions aren’t crazy.

It’s also a story of monopoly, or at least of suspected monopolies. Twitter tried to buy Instagram and Systrom claims to have never seen a term sheet with a legitimate offer. Then Facebook swooped in and helped fast-track regulatory approval of the acquisition. With the acquisition of WhatsApp, Facebook owns four of the top 6 social media sites, with Facebook, WhatsApp, Facebook Messenger, and Instagram all over a billion users and YouTube arguably being more of a video site than a true social network. And they tried to buy Snapchat - only the 17th ranked network. 

More than 50 billion photos have been shared through Instagram. That’s about a thousand a second. Many are beautiful...


(OldComputerPods) ©Sean Haas, 2020