Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at one of the more underwhelming operating systems released: Windows 1.0. Doug Engelbart demonstrated NLS, or the oN-Line System, in 1968. It was expensive to build, practically impossible to replicate, and was only made possible by NASA and ARPA grants. But it introduced the computer science research community to what would become modern video monitors, windowing systems, hypertext, and the mouse. Modern iterations of these are still with us today, as is a much more matured desktop metaphor. Some of his research team ended up at Xerox PARC and the Xerox Alto was released in 1973, building on many of the concepts and continuing to improve upon them. They sold about 2,000 Altos for around $32,000 each. As the components came down in price, Xerox tried to go a bit more mass market with the Xerox Star in 1981. They sold about 25,000 for about half the price. The windowing graphics got better, the number of users was growing, the number of developers was growing, and new options for components were showing up all over the place. Given that Xerox was a printing company, the desktop metaphor continued to evolve. Apple released the Lisa in 1983. They sold 10,000 for about $10,000 each. Again, the windowing system and desktop metaphor continued on, and Apple released the iconic Mac shortly thereafter, introducing much better windowing and a fully matured desktop metaphor, becoming the first mass-market computer that shipped with a graphical user interface. It was revolutionary and they sold 280,000 in the first year. The proliferation of computers in our daily lives and the impact on the economy were ready for the J-curve. And while IBM had shown up to compete in the PC market, they had just been leapfrogged by Apple. Jobs would be forced out of Apple the following year, though. By 1985, Microsoft had been making software for a long time. They had started out with BASIC for the Altair and had diversified, bringing BASIC to the Mac and releasing a DOS that could run on a number of platforms. And like many of those early software companies, it could have ended there. In a masterful stroke of business, Bill Gates ended up with their software on the IBM PCs that Apple had just basically made antiques - and they’d made plenty of cash off of doing so. But then Gates saw Visi On at COMDEX, and it’s no surprise that the Microsoft version of a graphical user interface would look a bit like Visi, a bit like what Microsoft had seen from Xerox PARC on a visit in 1983, and of course, with elements that were brought in from the excellent work the original Mac team had done. And of course, not to take anything away from early Microsoft developers, they added many of their own innovations as well. Ultimately though, it was a 16-bit shell that allowed for multi-tasking and sat on top of MS-DOS. Something that would continue on until the NT lineage of operating systems fully supplanted the original Windows line, which ended with Millennium Edition. Windows 1.0 was definitely a first try. IBM TopView had shipped that year as well. I’ve always considered it more of a windowing system, but it allowed multitasking and was object-oriented. It really looked more like a DOS menu system. But the Graphics Environment Manager, or GEM, had direct connections to Xerox PARC through Lee Lorenzen.
It’s hard to imagine, but at the time CP/M had been the dominant operating system, so GEM could sit on top of either it or MS-DOS, and it was mostly found on Atari computers. That first public release of Windows was actually 1.01, and 1.02 would come 6 months later, adding internationalization, with 1.03 continuing that trend. 1.04 would come in 1987, adding support for VGA graphics and a PS/2 mouse. Windows 1 came with many of the same programs other vendors supplied, including a calculator, a clipboard viewer, a calendar, a pad for writing that still exists called Notepad, a painting tool, and a game that went by its original name of Reversi, but which we now call Othello. One important concept is that Windows was object-oriented. As with any large software project, it wouldn’t have been able to last as long as it did if it hadn’t been. One simplistic explanation for this paradigm is that it had an API and there was a front-end that talked to the kernel through those APIs. Microsoft hadn’t been first to the party, and when they got to the party they certainly weren’t the prettiest. But because the Mac OS wasn’t just a front-end that made calls to the back-end, Apple would be slow to add multi-tasking support, which came in System 5, in 1987. And they would be slow to adopt new technology thereafter, having to bring Steve Jobs back to Apple because they had no operating system of the future, after failed projects to build one. Windows 1.0 had executable files (or exe files) that could only be run in the windowing system. It had virtual memory. It had device drivers, so developers could write and compile binary programs that could communicate with the OS through its APIs. One big difference - Bill Atkinson and Andy Hertzfeld spent a lot of time on frame buffers and moving pixels so they could have overlapping windows. The way Windows handled how a window appeared was defined in .ini (pronounced like “any”) files, and that kind of thing couldn’t be done in a window manager without clipping, or leaving artifacts behind. And so it was that, by the time I was in college, I was taught by a professor that Microsoft had stolen the GUI concept from Apple. But it was an evolution. Sure, Apple took it to the masses, but before that, Xerox had borrowed parts from NLS, and NLS had borrowed pointing devices from Whirlwind. And between Xerox and Microsoft, there had been IBM and GEM. Each evolved and added their own innovations. In fact, many of the actual developers hopped from company to company, spreading ideas and philosophies as they went. But Windows had shipped. And when Jobs called Bill Gates down to Cupertino, shouting that Gates had ripped off Apple, Gates responded with one of my favorite quotes in the history of computing: "I think it's more like we both had this rich neighbor named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it." The thing I’ve always thought was missing from that Bill Gates quote is that Xerox had a rich neighbor they stole the TV from first, called ARPA. And the US Government was cool with it - one of the main drivers of decades of crazy levels of prosperity filling their coffers with tax revenues. And so, the next version of Windows, Windows 2.0, would come in 1987. But Windows 1.0 would be supported by Microsoft for 16 years. No other operating system has been officially supported for so long. And by 1988 it was clear that Microsoft was going to win this fight.
Apple filed a lawsuit claiming that Microsoft had borrowed a bit too much of their GUI. Apple had licensed some of the GUI elements to Microsoft, and Apple identified over 200 things, some big, like title bars, that they argued made up a copyrightable work - that desktop metaphor that Susan Kare and others on the original Mac team had painstakingly developed. Well, it turns out those elements live on in every OS, because Judge Vaughn Walker threw out the lawsuit, a decision later upheld by the Ninth Circuit. And Microsoft would end up releasing Windows 3 in 1990, shipping on practically every PC built since. And so I’ll leave this story here. But we’ll do a dedicated episode for Windows 3 because it was that important. Thank you to all of the innovators who brought these tools to market and ultimately made our lives better. Each left their mark with increasingly small and useful enhancements to the original. We owe them so much, no matter the platform we prefer. And thank you, listeners, for tuning in for this episode of the History of Computing Podcast. We are so lucky to have you.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at a computer built at the tail end of World War II called Whirlwind. What makes Whirlwind so special? It took us from an era of punch-card, batch-processed computing into the era of interactive computing. Sometimes the names we end up using for things evolve over time. Your memories are a bit different than computer memory. Computer memory is information that is ready to be processed. Long-term memory, well, we typically refer to that as storage. That’s where you put your files. Classes you build in Swift are loaded into memory at runtime. But that memory is volatile and we call it random-access memory now. This computer memory first evolved out of MIT with Whirlwind. And so they came up with what we now call magnetic-core memory in 1955. Why did they need speeds faster than a vacuum tube? Well, it turns out vacuum tubes burn out a lot. And the flip-flop switching they do was cool for payroll. But not for tracking intercontinental ballistic missiles in real time and reacting to weather patterns so you can make sure to nuke the right target. Or intercept one that’s trying to nuke you! And in the middle of the Cold War, that was a real problem. Whirlwind didn’t start off with that mission. When MIT kicked things off, computers mostly used vacuum tubes. But they needed something… faster. Perry Crawford had seen the ENIAC in 1945 and recommended a digital computer to run simulations. They were originally going to train pilots in flight simulation, and they had Jay Forrester start working on it in 1947 ‘cause they needed to train more pilots faster. But as with many a true innovation in computing, this one was funded by the military and saw Forrester team up with Robert Everett to look for a way to run programs fast. This meant programs needed to be stored on the device rather than run in batch mode off punch cards that got loaded into the system. They wanted something really wild at the time. They wanted to see things happening on screens. It started with flight simulation, which would later become a popular computer game. But as the Cold War set in, the Navy didn’t need to train pilots quite as fast. Instead, they wanted to watch missiles traveling over the ocean, and they wanted computers that could be programmed to warn that missiles were in the air and potentially even intercept them. This required processing at speeds unheard of at the time. So they got a military grant for a million bucks a year, brought in 175 people, and built a 10-ton computer. And they planned to build 2k of random-access memory. To put things in context, the computer we’re recording on today has 16 gigs of memory, roughly 8,000,000 times more storage. And almost immeasurably faster. Also, cheaper. The Williams tubes they used at first would cost them $1 per bit per month. None of the ways people usually got memory were working. Flip-flopping circuits took too long, and other forms of memory at the time were unreliable. And you know what they say about necessity being the mother of invention. By the end of 1949 the computer could solve an equation and output to an oscilloscope, which were used as monitors before we had… um… monitors.
An Wang had researched using magnetic fields to switch currents, and Forrester ended up trying to do the same thing, but he had to manage the project and so he brought in William Papian and Dudley Buck to test various elements until they could find something that would work as memory. After a couple of years they figured it out, and built a plane of 1,024 cores, a 32 x 32 array. They filed for a patent for it in 1951. Wang also got a patent, as did Jan Rajchman from RCA, although MIT would later dispute that Buck had leaked information to Rajchman. Either way, they had the first real memory, which would be used for decades to come! The tubes used for processing in the Whirlwind would end up leading Ken Olsen to transistors, which led to the transistorized TX-0 (the love of many a Tech Model Railroad Clubber) and later to Olsen founding DEC. Suddenly, the Whirlwind was the fastest computer of the day. They also worked on the first pointing devices used in computing. Light-sensing vacuum tubes had been introduced in the 1930s, so they introduced a light gun that could interact with the tubes in the oscilloscopes people used to watch objects moving on the screen. There was an optical sensor in the gun that took input from the light shone on the screen. They used light pens to select an object. Today we use fingers. Those would evolve into the Zapper so we could play Duck Hunt by the 80s, but they began life in air defense. Whirlwind would evolve into Whirlwind II, and Forrester would end up fathering the SAGE air defense system on the technology. SAGE, or Semi-Automatic Ground Environment, would weigh 250 tons and be the centerpiece of NORAD, or North American Aerospace Defense Command. Remember the movie War Games? That. Dudley Buck would end up giving us content-addressable memory and helium-cooled processors, and came close to inventing the microprocessor. Many of the things he theorized and built on the way to getting a functional “cryotron,” as he called his superconducting switch, would be used in the later production of chips. IBM wanted in on these faster computers. So they paid $500,000 to Wang, who would use that money to build up Wang Laboratories, which by the 80s would build word processors and microcomputers. Wang would also build a tablet with email, a phone handset, and a word processing tool called Wang Office. That was the 1990 version of an iPad! After SAGE, Forrester would go on to teach at the Sloan School of Management and come up with system dynamics, the ultimate “what if” system. Basically, after he pushed the boundaries of what computers could do, helping us to maybe not end up in a nuclear war, he would push the boundaries of social systems. Whirlwind gave us memory, and tons of techniques to study, produce, and test transistorized computers. And without it, no SAGE, and none of the innovations that exploded out of that program. And probably no TX-0, and therefore no PDP-1, and none of the innovations that came out of the minicomputer era. It is a recognizable domino on the way from punch cards to interactive computers. So we owe a special thanks to Forrester, Buck, Olsen, Papian, and everyone else who had a hand in it. And I owe a special thanks to you, for tuning into this episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
Today we’re going to look at the history of the dial-up computer modem.
Modem is short for modulator/demodulator. Modulation means carrying a property (like voice or computer bits) over a waveform. Modems originally encoded voice data with frequency-shift keying, a technique developed during World War II. The voices were encoded into digital tones. That system was called SIGSALY. But they called them vocoders at the time.
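To make frequency-shift keying a little more concrete, here’s a minimal sketch in Python - not SIGSALY or a real modem, just the core idea that each bit becomes a short burst of one of two tones. The mark and space frequencies are roughly those of the Bell 103’s originating side, and the naive zero-crossing demodulator is purely illustrative.

import math

MARK_HZ = 1270      # tone used for a binary 1 ("mark")
SPACE_HZ = 1070     # tone used for a binary 0 ("space")
SAMPLE_RATE = 8000  # audio samples per second
BAUD = 300          # symbols (and, here, bits) per second
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD

def modulate(bits):
    """Turn a string of 0s and 1s into a list of audio samples."""
    samples = []
    for i, bit in enumerate(bits):
        freq = MARK_HZ if bit == "1" else SPACE_HZ
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

def demodulate(samples):
    """Guess each bit by counting zero crossings in its time slice."""
    bits = []
    for i in range(0, len(samples) - SAMPLES_PER_BIT + 1, SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if (a < 0) != (b < 0))
        est_freq = crossings * SAMPLE_RATE / (2 * SAMPLES_PER_BIT)
        bits.append("1" if abs(est_freq - MARK_HZ) < abs(est_freq - SPACE_HZ) else "0")
    return "".join(bits)

print(demodulate(modulate("1011001")))  # round-trips back to 1011001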
They matured over the next 17 years. And then came the SAGE air defense system in 1958. Here, the modem was employed to connect bases, missile silos, and radars back to the central SAGE system. These were Bell 101 modems and ran at an amazing 110 baud. Bell Labs, as in AT&T.
A baud is a unit of transmission equal to how many times a signal changes state per second. In those early modems each of those signal changes carried one bit, so that first modem was able to process data at 110 bits per second. This isn’t to say that baud is the same as bit rate, though. Early on the two matched, but later encoding schemes packed multiple bits into each signal change, skewing the bit rate higher than the baud rate.
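Put another way, the bit rate is the baud rate times the number of bits carried per signal change. A quick back-of-the-envelope sketch in Python - the 2400-baud, 4-bits-per-symbol line is illustrative of later quadrature schemes, not the Bell 101:

def bit_rate(baud, bits_per_symbol):
    # bits per second = signal changes per second x bits packed into each change
    return baud * bits_per_symbol

print(bit_rate(110, 1))   # Bell 101: 110 baud, 1 bit per symbol -> 110 bits per second
print(bit_rate(2400, 4))  # illustrative later modem: 2400 baud, 4 bits per symbol -> 9600 bps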
So AT&T had developed the modem, and after a few years they began to see commercial uses for it. So in 1962, they revved that 101 to become the Bell 103. Actually, the 103A. This thing used newer technology and better encoding, so it could run at 300 bits per second. Suddenly teletypes - or terminals - could connect to computers remotely. But Ma Bell kept a tight leash on how they were used for those first few years. That is, until 1968.
In 1968 came what is known as the Carterfone Decision. We owe a lot to the Carterfone. It bridged radio systems to telephone systems. And Ma Bell had been controlling what lived on their lines for a long time. The decision opened up what devices could be plugged into the phone system. And suddenly new innovations like fax machines and answering machines showed up in the world.
And so in 1968, any device with an acoustic coupler could be hooked up to the phone system. And that Bell 103A would lead to others. By 1972, Stanford Research had spun out a device, as had Novation and others. But the Vadic added full duplex and got speeds four times what the 103A worked at by employing duplexing and new frequencies. We were up to 1200 bits per second.
The bit rate had jumped four-fold because, well, competition. Prices dropped and by the late 1970s microcomputers were showing up in homes. There was a modem for the S-100 Altair bus, for the Apple II through a Z-80 SoftCard, and even for the Commodore PET. And people wanted to talk to one another. TCP had been developed in 1974, but at this point the most common way to communicate was to dial directly into bulletin board systems.
1981 was a pivotal year. A few things happened that were not yet connected at the time. The National Science Foundation created the Computer Science Network, or CSNET, which would lead to NSFNET later and, when combined with the other nets, the Internet, which replaced ARPANET.
1981 also saw the release of the Commodore VIC-20 and TRS-80. This led to more and more computers in homes and more people wanting to connect with those online services. Later models would have modems.
1981 also saw the release of the Hayes Smartmodem. This was a physical box that connected to the computer over a serial port. The Smartmodem had a controller that recognized commands. And it established the Hayes command set standard that would be used to work with phone lines, allowing you to initiate a call, dial a number, answer a call, and hang up - all without lifting a handset and placing it on an acoustic coupler. On the inside it was still 300 baud, but the progress and innovations were speeding up, even if it didn’t seem like a huge deal at the time.
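If you’ve never seen the Hayes command set, here’s a hedged sketch of what driving a Hayes-compatible modem looks like from Python, using the third-party pyserial library. The serial port name and phone number are made up for illustration, and real modems vary in how they respond - this is a sketch of the idea, not a tested dialer.

import time
import serial  # third-party pyserial package

def send(ser, command):
    """Send one AT command, terminated with a carriage return, and read the reply."""
    ser.write((command + "\r").encode("ascii"))
    return ser.read_until(b"OK").decode("ascii", errors="replace")

ser = serial.Serial("/dev/ttyUSB0", baudrate=300, timeout=5)  # hypothetical port
print(send(ser, "AT"))           # attention check: a listening modem answers OK
print(send(ser, "ATDT5551234"))  # dial 555-1234 using tone dialing (ATDP would pulse dial)
# ... exchange data over the connection ...
time.sleep(1)
ser.write(b"+++")                # escape sequence back to command mode (no carriage return)
time.sleep(1)
print(send(ser, "ATH"))          # hang up
ser.close()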
The online services were starting to grow. The French Minitel service was released commercially in 1982. The first BBS that would become FidoNet showed up in 1983. Various encoding techniques started to come along, and by 1984 you had the Trailblazer modem, at over 18,000 bits a second. But this was for specific uses and combined many 36-bit-per-second channels.
The use of email started to increase, and so did the need for even more speed. We got the ability to connect two USRobotics modems in the mid-80s to run at 2400 bits per second. But Gottfried Ungerboeck would publish a paper defining a theory of information coding and add parity checking at about the time we got echo suppression. This allowed us to jump to 9600 bits per second in the late 80s.
All of these vendors releasing all of this resulted in the V.21 standard in 1989 from the ITU Telecommunication Standardization Sector (ITU-T). They’re the ones that ratify a lot of standards, like X.509 or MP4. Several other V-dot standards would come along as well.
The next jump came with the SupraFAXModem with Rockwell chips, which was released in 1992. And USRobotics brought us to 16,800 bits per second, but with errors. But we got V.32bis in 1991 to get to 14.4 - now we were talking in kilobits! Then 19.2 in 1993, 28.8 in 1994, and 33.6 in 1996. By 1999 we got the last of the major updates, V.90, which got us to 56k. At this point, most homes in the US at least had computers and were going online.
The same year, ANSI ratified ADSL, or Asymmetric Digital Subscriber Line. Suddenly we were communicating in the megabits. And the dial-up modem began to be used less and less. In 2004 the Multimedia over Coax Alliance was formed, and cable modems became standard. The combination of DSL and cable modems has now all but removed the need for dial-up modems. Given the pervasiveness of cell phones today, as few as 20% of homes in the US have a phone line anymore. We’ve moved on.
But the journey of the dial-up modem was a key contributor to us getting from a lot of disconnected computers to… The Internet as we know it today. So thank you to everyone involved, from Ma Bell, to Rockwell, to USRobotics, to Hayes, and so on. And thank you, listeners, for tuning in to this episode of the History of Computing Podcast. We are so lucky to have you. Have a great day.
Today we’re going to look at an operating system from the 80s and 90s called OS/2. OS/2 was a bright shining light for a bit. IBM had a task force that wanted to build a personal computer. They’d been watching the hobbyists for some time and felt they could take off-the-shelf parts and build a PC. So they did. But they needed an operating system. They reached out to Microsoft in 1980, who’d been successful with the Altair and so seemed a safe choice. By then, IBM had the IBM Entry Systems Division based out of their Boca Raton, Florida offices. The open architecture allowed them to ship fast. And it afforded them the chance to ship a computer with, check this out, options for an operating system. Wild idea, right? The options initially provided were CP/M and PC DOS, which was MS-DOS ported to the IBM open architecture. CP/M sold for $240 and PC DOS sold for $40. PC DOS had come from Microsoft’s acquisition of 86-DOS from Seattle Computer Products. The PC shipped in 1981, lightning fast for an IBM product. At the time Apple, Atari, and Commodore were in control of the personal computer market. IBM had dominated the mainframe market for decades, and once the personal computer market reached $100 million in sales, it was time to go get some of that. And so the IBM PC would come to be an astounding success and make it not uncommon to see PCs on people’s desks at work or even at home. And being that most people didn’t know the difference, PC DOS would ship on most. By 1985 it was clear that Microsoft had entered and subsequently dominated the PC market. And it was clear that, due to the open architecture, other vendors were starting to compete. And after 5 years of working together on PC DOS and 3 versions later, Microsoft and IBM signed a Joint Development Agreement and got to work on the next operating system. One they thought would change everything and set IBM PCs up to dominate the market for decades to come. Over that time, they’d noticed some gaps in DOS. One of the most substantial was that after the projects and files got too big, they became unwieldy. They wanted an object-oriented operating system. Another was protected mode. The 286 chips from Intel had protected mode dating back to 1982, and IBM engineers felt they needed to harness that in order to get multi-tasking safely, and to harness virtual memory to provide better support for all these crazy new windowing things they’d learned with their GUI overlay to DOS called TopView. So after the Joint Development Agreement was signed, IBM let Ed Iacobucci lead the charge on their side, and Microsoft had learned a lot from their attempts at a windowing operating system. The two organizations borrowed ideas from all the literature and Unix and, of course, the Mac. And they really built a much better operating system than anything available at the time. Microsoft had been releasing Windows the whole time. Windows 1 came in 1985 and Windows 2 came in 1987, the same year OS/2 1.0 was released. In fact, one of the most dominant PC models to ever ship, the PS/2 computer, would ship that year as well. The initial release didn’t have a GUI. That wouldn’t come until version 1.1 nearly a year later, in 1988. SNA shipped to interface with IBM mainframes in that release as well. And TCP/IP and Ethernet would come in version 1.2 in 1989. During this time, Microsoft steadily introduced new options in Windows and claimed both publicly and privately in meetings with IBM that OS/2 was the OS of the future and Windows would some day go away.
They would release an extended edition that included a built-in database. Thanks to protected mode, developers didn’t have to call the BIOS anymore and could just use the provided APIs. You could switch the foreground application using control-escape. In Windows that would become Alt-Tab. 1.2 brought the HPFS file system, bringing longer file names, a journaled file system to protect against data loss during crashes, and extended attributes, similar to how those worked on the Mac. But many of the features would ship in a version of Windows that would be released just a few months before. Like that GUI. Microsoft’s Presentation Manager came in Windows 2.1 just a few months before OS/2 1.1. Microsoft had an independent sales team. Every manufacturer that bundled Windows meant there were more drivers for Windows, so a wider variety of hardware could be used. Microsoft realized that DOS was old and building on top of DOS was going to some day be a big, big problem. They started something similar to what we’d today call a fork of OS/2. And in 1988 they lured Dave Cutler from Digital, who had been the architect of the VMS operating system. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because um, they had a loooooot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler’s NT would replace all other operating systems in the family with the release of Windows 2000. But by 1990, when Microsoft released Windows 3, they sold millions of copies. Due to great OEM agreements they were on a lot of computers that people bought. The Joint Development Agreement would finally end. IBM had had enough of what they felt was getting snowed by Microsoft. It took a couple of years for Microsoft to recover. In 1992, the war was on. Microsoft released Windows 3.1 and it was clear that they were moving ideas and people between the OS/2 and Windows teams. I mean, the operating systems actually looked a lot alike. TCP/IP finally shipped in Windows in 1992, 3 years after the companies had co-developed the feature for OS/2. But both would go 32-bit in 1992. OS/2 version 2.0 would also ship, bringing a lot of features. And both took off the blinders, thinking about what the future would hold. Microsoft put Windows 95 and NT on parallel development tracks, and IBM launched multiple projects to find a replacement operating system. They tried an internal project, Workstation OS, which fizzled. IBM did the unthinkable for Workplace OS. They entered into an alliance with Apple, taking on a number of Apple developers who formed what would be known as the Pink team. The Pinks moved into separate quarters and formed a new company called Taligent, with Apple and IBM backing. Taligent planned to bring a new operating system to market in the mid-1990s. They would laser focus on PowerPC chips, thus abandoning what was fast becoming the WinTel world. They did show Workplace OS at Comdex one year, but by then Bill Gates was all too happy to swing by the booth knowing he’d won the battle. But they never shipped. By the mid-90s, Taligent would be rolled into IBM and focus on Java projects. Raw research that came out of the project is pretty pervasive today though.
That was an example of a forward-looking project, though - and OS/2 continued to be developed, with OS/2 Warp (or 3) getting released in 1994. It included IBM Works, which came with a word processor that wasn’t Microsoft Word, a spreadsheet that wasn’t Microsoft Excel, and a database that wasn’t Microsoft Access. Works wouldn’t last past 1996. After all, Microsoft had Charles Simonyi by then. He’d invented the GUI word processor at Xerox PARC and was light years ahead of the Warp options. And the Office suite in general was gaining adoption fast. Warp was faster than previous releases, had way more options, and even had browser support for early Internet adopters. But by then Windows 95 had taken the market by storm and OS/2 would see a rapidly declining customer base. After spending nearly a billion dollars a year on OS development, IBM would begin downsizing once the battle with Microsoft was lost. Over 1,300 people. And as the number of people dropped, defects in the code grew and adoption dropped even faster. OS/2 development would end in 2001. By then it was clear that IBM had lost the exploding PC market and that Windows was the dominant operating system in use. IBM’s control of the PC had slowly eroded, and while they eked out a little more profit from the PC, they would ultimately sell the division that built and marketed computers to Lenovo in 2005. Lenovo would then enjoy the number one spot in the market for a long time. The blue ocean had resulted in lower margins though, and IBM had taken a different, more services-oriented direction. OS/2 would live on. IBM discontinued support in 2006. It should have probably gone fully open source in 2005. It had already been renamed and rebranded as eComStation, first by an IBM Business Partner called Serenity. It would go open source(ish), and OpenOffice.org would be included in version two in 2010. Betas of 2.2 have been floating around since 2013, but as with many other open source compilations of projects, it seems to have mostly fizzled out. Ed Iacobucci would go on to found or co-found other companies, including Citrix, which flourishes to this day. So what really happened here? It would be easy, but an over-simplification, to say that Microsoft just kinda’ took the operating system. IBM had a vision of an operating system that, similar to the Mac OS, would work with a given set of hardware. Microsoft, being an independent software developer with no hardware, would obviously have a different vision, wanting an operating system that could work with any hardware - you know, the original open architecture that allowed early IBM PCs to flourish. IBM had a big-business, suit-and-tie corporate culture. Microsoft did not. IBM employed a lot of computer scientists. Microsoft employed a lot of hackers. IBM had a large bureaucracy; Microsoft could build an operating system like NT mostly based on hiring a single brilliant person and rapidly building an elite team around them. IBM was a matrixed organization. I’ve been told you aren’t an enterprise unless you’re fully matrixed. Microsoft didn’t care about all that. They just wanted the market share. When Microsoft abandoned OS/2, IBM could have taken the entire PC market from them. But I think Microsoft knew that the IBM bureaucracy couldn’t react quickly enough at an extremely pivotal time. Things were moving so fast. And some of the first real buying tornadoes just had to be reacted to at lightning speed.
These days we have literature, and those going through such things can bring in advisors or board members to help them. Like the roles Marc Andreessen plays with Airbnb and others. But this was uncharted territory, and due to some good, shrewd, and maybe sometimes downright bastardly decisions, Microsoft ended up leap-frogging everyone by moving fast, sometimes incurring technical debt that would take years to pay down, and grabbing the market at just the right time. I’ve heard this story oversimplified in one word: subterfuge. But that’s not entirely fair. When he was hired in 1993, Louis Gerstner pivoted IBM from a hardware and software giant into a leaner services organization. One that still thrives today. A lot of PC companies came and went. And the PC business infused IBM with the capital to allow the company to shoot from $29 billion in revenues to $168 billion just 9 years later. From the top down, IBM was ready to leave red oceans and focus on markets with fewer competitors. Microsoft was hiring the talent. Picking up many of the top engineers from the advent of interactive computing. And they learned from the failures of the Xeroxes and Digital Equipments and IBMs of the world and decided to do things a little differently. When I think of a few Microsoft engineers that just wanted to build a better DOS sitting in front of a 60-page refinement of how a feature should look, I think maybe I’d have a hard time trying to play that game as well. I’m all for relentless prioritization. And user testing features and being deliberate about what you build. But when you see a limited window, I’m OK acting as well. That’s the real lesson here. When the day needs seizing, good leaders will find a way to blow up the establishment and release the team to go out and build something special. And so yah, Microsoft took the operating system market once dominated by CP/M and, with IBM’s help, established themselves as the dominant player. And then took it from IBM. But maybe they did what they had to do… Just like IBM did what they had to do, which was move on to more fertile hunting grounds for their best-in-the-world sales teams. So tomorrow, think of bureaucracies you’ve created or had created to constrain you. And think of where they are making the world better vs where they are just giving some controlling jackrabbit a feeling of power. And then go change the world. Because that is what you were put on this planet to do. Thank you so much for listening in to this episode of the History of Computing Podcast. We are so lucky to have you.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at Windows 3. In our previous episode, we covered Windows 1.0. Released in 1985, it was cute. Windows 2 came in 1987 and then Windows 3 came in 1990. While a war of GUIs had been predicted, it was clear by 1990 that Microsoft was winning this war. Windows 3.0 sold 10 million licenses. It was 5 megabytes fully installed and came on floppies. The crazy thing about Windows 3 is that it wasn’t really supposed to happen. IBM had emerged as a juggernaut in the PC industry, largely on the back of Microsoft DOS. Windows 1 and 2 were fine, but IBM, seeing that Microsoft was getting too powerful, would not run them on their computers. Instead, they began work on a new operating system called OS/2, which was initially released in 1987. But David Weise from the Windows team at Microsoft wanted to reboot the Windows project. He brought in Murray Sargent and the two started work in 1988. They added a debugger, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint, and I’m pretty sure everyone knew they were on to something big. IBM found out, and Microsoft placated them by saying they would kill Windows after they had spent all this money on it. You could tell with the way they upgraded the UI, with how they made memory work so much better, and with the massive improvements to multitasking. Lies. They added File Manager, which would later evolve into File Explorer. They added the Control Panel, which lives on into the modern era of Windows, and they made it look more like the one in the Mac OS at the time. They added the Program Manager (or progman.exe), parts of which would go on to Windows Explorer and other parts of which would form the Start Menu in the future. But it survived until XP Service Pack 2. They brought us up to 16 simultaneous colors and added support for graphics cards that could give us 256 colors. Paint was upgraded to Paintbrush, and they outsourced some of the graphics for the famed Microsoft Solitaire to Susan Kare. They also added macros using a program called Recorder, similar to the MacroMaker Apple had released the year before. They raised the price from $100 to $149.95. And they sold 4 million copies in the first year, a huge success at the time. They added a protected mode for applications, which had supposedly been a huge reason IBM insisted on working on OS/2. One result of all of this was that IBM and Microsoft would stop developing together, and Microsoft would go on to release their own branch, Windows NT. NT had a new 32-bit API. In 1992 they would release Windows 3.1 and Windows for Workgroups 3.1, which would sell another 3 million copies. This was the first time I took Windows seriously and it was a great release. They replaced Reversi with the now-iconic Minesweeper. They added menu customization. They removed Real Mode. They added support to launch programs using command.com. They brought in TrueType fonts and added Arial, Courier New, and Times New Roman. They added multimedia support. And amongst the most important additions, they added the Windows Registry, which still lives on today. That was faster than combing through a lot of .ini files for settings. The Workgroups version also added SMB file sharing and supported NetBIOS and IPX networking. The age of the Local Area Network, or LAN, was upon us.
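To see why the Registry mattered, here’s a hedged sketch in Python contrasting the two approaches: scanning a plain-text .ini file for a setting versus reading one value out of the Registry’s hierarchical store. The file name, section, and key path are just illustrative, not a claim about what Windows 3.x actually stored where.

import configparser

# The Windows 3.x way: settings scattered across plain-text .ini files.
ini = configparser.ConfigParser()
ini.read("win.ini")  # hypothetical .ini file in the current directory
print("From .ini:", ini.get("Desktop", "Wallpaper", fallback="(not set)"))

# The Registry way (Windows only): one hierarchical, indexed store.
try:
    import winreg
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop")
    value, value_type = winreg.QueryValueEx(key, "Wallpaper")
    winreg.CloseKey(key)
    print("From the Registry:", value)
except (ImportError, OSError):
    print("Registry access only works on Windows.")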
You could even install Winsock to get the weird TCP/IP protocol to work on Windows. Oh, and remember that 32-bit API? You could install the Win32s add-on to get access to that. And because the browser wars would be starting up, by 1995 you could install Internet Explorer on 3.1. I remember 3.11 machines in the labs I managed in college and having to go computer to computer installing the browser on each. And installing Mosaic on the Macs. And later installing Netscape on both. I seem to remember that we had a few machines that ran Windows on top of the CP/M successor DR DOS. Nothing ever seemed to work right for them, especially the Internets. So… Where am I going with this episode? Windows 3 set Microsoft up to finally destroy CP/M, protect their market share from IBM, and effectively take over the operating system market, allowing them to focus on adjacencies like Internet and productivity tools. This ultimately made Bill Gates the richest man in business and set up a massive ride in personal computing. But by the time Windows 95 was announced, enough demand had been generated to sell 40 million copies. Compaq, Dell, Gateway, HP, and many others had cannibalized the IBM desktop business. Intel had AMD nipping at their heels. Motherboards, power supplies, and other components had become commodities. But somehow, Microsoft had gone from being the cutesy little maker of BASIC to owning the market share for operating systems with NT, Windows 95, 98, Millennium, 2000, XP, 7, 8, and 10 - and it wasn’t seriously challenged until Google made Android and ChromeOS. They did it, not because they were technologically the best solution available. Although arguably the APIs in early Windows were better than any other available solution. And developing Windows NT alongside 95 and onward, once they saw there would be a need for a future OS, was a masterstroke. There was a lot of subterfuge and guile. And there were a lot of people burned during the development. But there’s a distinct chance that the dominance of a single operating system gave humans a single OS to focus on and care about, and gave us an explosion in the number of software titles. Once that became a problem, and was stifling innovation, Steve Jobs was back at Apple, Android was on the rise, and Linux was always an alternative for the hacker-types - and given a good market potential, it’s likely that someone could have built a great windowing system on top of it. Oh wait, they did. Many times. So whether we’re Apple die-hards, Linux blowhards, crusty old Unix greybeards, or maybe hanging on to our silly CP/M machines to write scripts on, we still owe Microsoft a big thanks. Without their innovations the business world might have been fragmented so much on the operating system side that we wouldn’t have gotten the productivity levels we needed out of apps. And so Windows 95 replaced Windows 3, and Windows 3 rode off into the sunset. But not before leaving behind a legacy of the first truly dominant OS. Thanks for everything, Microsoft, the good and the bad. And thanks to you, sweet listeners. It’s been a blast. You’re the best. Unlike Windows 1. Till next time, have a great day!
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today’s episode is about the IBM System/360. The System/360 was a family of mainframes. IBM has done a great job over the decades following innovations rather than leading them, but there might not be another single innovation that was as influential on computing as the System/360. It’s certainly hard to think of one. IBM had been building mainframes with the 700 and 7000 series of systems since 1952, so they weren’t new to the concept in 1964 when the S360 was announced (also the year Disney released Mary Poppins). But they wanted to do something different. They were swimming in a red ocean of vendors who had been leading the technology, and while they had a 70 percent market share, they were looking to cement a long-term leadership position in the emerging IT industry. So IBM decided to take a huge leap forward and brought the entire industry with them. This was a risky endeavor. Thomas Watson Jr, son of the great IBM business executive Thomas Watson Sr, bet the proverbial farm on this. And won that bet. In all, IBM spent 5 billion dollars in mid-1960s money, which would be $41B today with a cumulative 726.3% rate of inflation. To put things in context around the impact of the mainframe business, IBM revenues were at $3.23B in 1964 and more than doubled to $7.19B by 1970, when the next edition, the 370, was released. To further that context, the Manhattan Project, which resulted in the first atomic bomb, cost $2B. IBM did not have a project this large before the introduction of the S360 and has not had one in the more than 50 years since then. Further context: the total value of all computers deployed at the start of the project was only $10B. These were huge. They often occupied a dedicated room. The front panel had 12 switches just to control the electricity that flowed through them. They had over 250 lights. It was called “System” 360 because it was a system, meaning you could hook disk drives, printers, and other peripherals up to them. It had sixteen 32-bit registers and four 64-bit floating point registers for the crazy math stuff. The results were fast, with over 1000 orders in the first month and another 1000 by year’s end. IBM sales skyrocketed, and computers were suddenly showing up in businesses large and small. The total inventory of computers in the world jumped to a $24B value in just 5 years. A great example of the impact they had can be found in the computer the show Mad Men featured, where the firm got an S360 and it served as a metaphor for how the times were about to change - the computer was analytical, where Don worked through inspiration. Just think, an interactive graphics display that let business nerds do what only computer nerds could do before. This was the real start to “data driven” decision making. By 1970 IBM had deployed 35k mainframes throughout the US. They spawned enough huge competitors that the big mainframe players were referred to as Snow White and the 7 dwarfs, and later just “the BUNCH,” which consisted of Burroughs, the UNIVAC division of Sperry Rand, NCR, Control Data, and Honeywell. If you remember the earlier episode on Grace Hopper, she spent some time there. Thomas Watson Jr. retired the following year, in 1971, after suffering a heart attack, leaving behind one of the greatest legacies of anyone in business.
He would serve as ambassador to the Soviet Union from ’79 to ’81, and remain an avid pilot in retirement. He passed away in 1993. A lot of things sell well. But sales and revenue aren’t the definition that shapes a legacy. The S360 created so many standards and pushed technology forward so much that the business legacy is almost a derivative of the technical legacy. What standards did the S360 set? Well, the bus was huge. Standardizing I/O would allow vendors to build expansion devices and would ultimately become the standard later. The 8-bit byte is still used today and bucked the trend of variable word sizes and arbitrary bit addressing. To speed up larger and larger transactions, the S360 also gave us byte-addressable memory with 24-bit addressing and 32-bit words. There was also a small, fast memory with control code stored there permanently, known as microcode memory. This meant you didn’t have to hand-wire each memory module into the processor. The control store also led to emulators, as you could emulate a previous IBM model, the 1401, in the control store. IBM spent $13M on the patent for the tech that came out of MIT to get access to the best memory on the market. The S360 made permanent storage a mainstay. IBM had been using tape storage since 1952. Its 14-inch disk drives were smaller than the 24-inch disk drives used in previous models, had 100x the storage capacity, and accessed data 10 times faster. The S360 also brought with it new programming paradigms. We got hexadecimal floating-point architecture. This would be important to New Drug Applications to the FDA, weather prediction, geophysics, and graphics databases. We also got Extended Binary Coded Decimal Interchange Code, or EBCDIC for short - an 8-bit character encoding. This came from moving from punch cards to persistent storage on the computers: the zone and digit punches on cards mapped into the encoding, and the extra bits left room for things like distinguishing a small s from a large S. EBCDIC was not embraced by the rest of the computer hacker culture. One joke went: "So the American government went to IBM to come up with an encryption standard, and they came up with… EBCDIC!" ASCII has mostly been accepted as the standard for encoding characters (before and after EBCDIC). Solid Logic Technology (or SLT) also came with the S360. These flip-chip-mounted packages contained transistors, diodes, and resistors in a ceramic substrate that had sockets on one edge and could be plugged into the backplane of a computer. Think of these as a precursor to the microchip and the death of vacuum tubes. The central processor could run machine language programs. It ran OS/360, officially known as IBM System/360 Operating System. You could load programs written in COBOL and FORTRAN, and many organizations are still running code written way back then. The way we saw computers, and the way they were made, also changed. Architecture versus implementation was another substantial innovation. Before the S360, computers were built for specific use cases. They were good at business or they were good at science. But one system wasn’t typically good at both tasks. In fact, IBM had 7 mainframe lines at this point, sometimes competing with each other. The S360 allowed them to unify that into the size and capacity of a machine rather than the specific use case. We went from “here’s your scientific mainframe” or “here’s your payroll mainframe” to “here’s your computer”. The Model 30 was introduced in 1965, along with 5 other initial models: the 40, 50, 60, 62, and 70.
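Going back to the encoding for a moment, the same characters map to different byte values in ASCII and EBCDIC, and Python happens to ship a codec for one common EBCDIC code page (cp037, US/Canada). S/360 shops used several EBCDIC variants, so treat this as illustrative; the last line is just the arithmetic behind that 24-bit address space.

text = "IBM 360"
print(text.encode("ascii").hex(" "))  # 49 42 4d 20 33 36 30
print(text.encode("cp037").hex(" "))  # c9 c2 d4 40 f3 f6 f0  (same text, EBCDIC bytes)

print(2 ** 24)  # 16,777,216 addressable bytes - the 16 MB ceiling of 24-bit addressing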
The tasks were not specific to each model, and a customer could grow into additional models - or, if the needs weren’t growing, could downgrade to a lower model in the planned 5-year obsolescence cycle that computers seem to have. Given all of this, the project was huge. So big that it led to Thomas Watson forcing his own brother, Dick Watson, out of IBM, and moving the project to be managed by Fred Brooks, who worked with Chief Architect Gene Amdahl. John Opel managed the launch in 1964. In large part due to his work on the S360 project, Brooks would go on to write a book called The Mythical Man-Month, which brought us what’s now referred to as Brooks’ Law: adding additional developers to a late software project does not speed it up, but instead makes it take longer. Amdahl would go on to found his own computer company. In all, there were twenty models of the S360, although only 14 shipped - and IBM had sold 35,000 by 1970. While the 60 in S360 would go on to refer to the decade, and the follow-on S370 would define computing in the 70s, the S360 was sold until 1978. With a two-thirds market share came anti-trust cases, which saw software suddenly being sold separately and leasing companies extending that 5-year obsolescence - thus IBM lessors became the number one competition. Given just how much happened in the 13-year life of the System/360, even the code endures in some cases. The System Z servers are still compatible with many applications written for the 360. The S360 is iconic. The S360 was bold. It set IBM on a course that would shape their future and the future of the world. But most importantly, before the S360, computers were one thing used for really big jobs - after the S360, they were everywhere, and people started to think about business in terms of a new lexicon with words like “data” and “automation.” It led to no one ever getting fired for buying IBM and set the IT industry on a course to become what it is today. The revolution was coming no matter what. But not being afraid to refactor everything in such a big, bold demonstration of market dominance made IBM the powerhouse it is even today. So next time you have to refactor something, think of the move you’re making - and ask yourself: What Would Watson Do? Or just ask Watson.
We go knee-deep into available computing technology in the late 1950s and what it was used for: missiles and satellites. We see the creation of the NASA RTCC in a muddy field and revisit what IBM is up to.
The early 1960s were full of gigantic leaps in computing technology, and a lot of it was used at NASA to get astronauts into space! We see the first use of the word “mainframe,” and see how Neil Armstrong used a new invention called "secondary storage" to try to save his own life!
In this episode, we solve the problem of calculating from inaccurate tables, learn about NYU art professor and inventor of the first electrical language, Samuel Morse, and discover how the Tabulating Machine Company got its start and made one man very rich (and the census a lot easier). Tune in!
DTSS, or The Dartmouth Time Sharing System, began at Dartmouth College in 1963. That was the same year Project MAC started at MIT, which is where we got Multics, which inspired Unix. Both contributed in their own way to the rise of the Time Sharing movement, an era in computing when people logged into computers over teletype devices and ran computing tasks - treating the large mainframes of the era like a utility.
The notion had been kicking around in 1959, but then John McCarthy at MIT started a project on an IBM 704 mainframe. And PLATO was doing something similar over at the University of Illinois, Urbana-Champaign. 1959 is also when John Kemeny and Thomas Kurtz at Dartmouth College bought a Librascope General Purpose computer, then being made in a partnership between Librascope and the Royal Typewriter Company - which would later be sold off to Lockheed Martin.
Librascope had Stan Frankel - who had worked on both the Manhattan Project and the ENIAC. And he architected the LGP-30 in 1956, which ended up at Dartmouth. At this point, the computer looked like a desk with a built-in typewriter.
Kurtz had four students that were trying to program in ALGOL 58. And they ended up writing a language called DOPE in the early 60s. But they wanted everyone on campus to have access to computing - and John McCarthy said why not try this new time sharing concept. So they went to the National Science Foundation and got funding for a new computer, which to the chagrin of the local IBM salesman, ended up being a GE-225.
This baby was transistorized. It sported 10,000 transistors and double that number of diodes. It could do floating-point arithmetic, used a 20-bit word, and came with 186,000 magnetic cores for memory. It was so space-age that one of the developers, Arnold Spielberg, would father one of the greatest film directors of all time. Likely straight out of those diodes.
Dartmouth also picked up a front-end processor called a DATANET-30 from GE. This only had an 18-bit word size but could do 4k to 16k words and supported hooking up 128 terminals that could transfer data to and from the system at 3,000 bits a second using the Bell 103 modem. Security wasn’t a thing yet, so these things had direct memory access to the 225, which was a 235 by the time they received the computer.
They got to work in 1963, installing the equipment and writing the code. The DATANET-30 received commands from the terminals and routed them to the mainframe. It scanned the terminals 110 times per second, and when the return key was pressed on a terminal, whatever had been typed was treated as a command and queued up to run, taking into account routine tasks the computer might be doing in the background.
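Here’s a toy model, in Python, of the trick being described: one processor services a queue of commands from many terminals a small slice at a time, so every user sees progress even though only one thing ever runs at once. It illustrates round-robin time sharing in general, not how DTSS or the DATANET-30 were actually implemented.

from collections import deque

class Job:
    def __init__(self, terminal, command, work_units):
        self.terminal = terminal
        self.command = command
        self.remaining = work_units  # pretend units of CPU work left to do

def run(jobs, slice_units=1):
    queue = deque(jobs)
    while queue:
        job = queue.popleft()         # take the next queued command
        job.remaining -= slice_units  # give it one short time slice
        if job.remaining > 0:
            queue.append(job)         # not finished: back of the line
        else:
            print(f"terminal {job.terminal}: {job.command} finished")

run([
    Job(1, "RUN PROGRAM1", 3),
    Job(2, "LIST", 1),
    Job(3, "RUN PROGRAM2", 2),
])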
Keep in mind, the actual CPU was only doing one task at a time, but it seemed like it was multi-tasking! Another aspect of democratizing computing across campus was to write a language that was more approachable than a language like ALGOL. And so they released BASIC in 1964, picking up where DOPE left off, and picking up a more marketable name.
Here we saw a dozen undergraduates develop a language that was as approachable as the name implies. Some of the students went to Phoenix, where the GE computers were built. And the powers at GE saw the future.
After seeing what Dartmouth had done, GE ended up packaging the DATANET-30 and GE-235 as one machine, which they marketed as the GE-265 the next year. And here we got the first commercially viable time-sharing system, which started a movement. One so successful that GE decided to get out of making computers and focus instead on selling access to time sharing systems. By 1968 they had shot up to 40% of the market of the day.
Dartmouth picked up a GE Mark II in 1966 and got to work on DTSS version 2. Here, they added some of the concepts coming out of the Multics project that was part of Project MAC at MIT and built on previous experiences. They added pipes and communication files to promote inter-process communications - thus getting closer to the multiple user conferencing like what was being done on PLATO with Notes.
Things got more efficient and they could handle more and more concurrent sessions. This is when they went from just wanting to offer computing as a basic right on campus to opening up to schools in the area. Nearby Hanover High School started first, and by 1967 they had over a dozen schools. Using further grants from NSF they added another dozen schools to what by then they were calling the Kiewit Network. They then added other smaller colleges, and by 1971 supported a whopping 30,000 users. And by ’73 they supported leased-line connections all the way to Ohio, Michigan, New York, and even Montreal.
The system continued on in one form or another, allowing students to code in FORTRAN, COBOL, LISP, and yes… BASIC. It became less of a thing as Personal Computers started to show up here and there. But BASIC didn’t. Every computer needed a BASIC. But people still liked to connect on the system and share information. At least, until the project was finally shut down in 1999. Turns out we didn’t need time sharing once the Internet came along.
Following the early work done by pioneers, companies like Tymshare and CompuServe were born. Tymshare came out of two of the GE team, Thomas O’Rourke and David Schmidt. They ran on SDS hardware and by 1970 had over 100 people, focused on time sharing with their Tymnet system and spreading into Europe by the mid-70s, selling time on their systems until the cost of personal computing caught up and they were acquired by McDonnell Douglas in 1984.
CompuServe began on a PDP-10 and started similarly, but by the time they were acquired by H&R Block they had successfully pivoted into a dial-up online services company and over time focused on selling access to the Internet. And they survived through to an era when they migrated their own proprietary tooling to HTML in the late 90s - although they were eventually merged into AOL and are now a part of Verizon Media. So the pivot bought them an extra decade or so.
Time sharing and BASIC proliferated across the country and then the world from Dartmouth. Much of this - and a lot of personal stories from the people involved - can be found in Dr. Joy Rankin’s “A People’s History of Computing in the United States.” Published in 2018, it’s a fantastic read that digs in deep on the ways that many of these systems evolved. There are other works, but she does a phenomenal job tying events into one another.
One consistent point across her book is around societal impact. These pioneers democratized access to computing. Many of those who built businesses around time sharing missed the rapidly falling price of chips and the ready access to personal computers that were coming. They also missed that BASIC would be monetized by companies like Microsoft. But they brought computing to high schools in the area, established blueprints for teaching that are used through to this day, and as Grace Hopper did a generation before - made us think of even more ways to make programming more accessible to a new generation with BASIC.
One other author of note here is John Kemeny. His book “Man and the Computer” is a must-read. He didn’t have knowledge of the personal computing that was coming - but he was far more prophetic than not about cloud operations, as we get back to a time-sharing-esque model of computing. And we do owe him, Kurtz, and everyone else involved a huge debt for their work. Many others pushed the boundaries of what was possible with computers. They pushed the boundaries of what was possible with accessibility. And now we have ubiquity.
So when we see something complicated. Something that doesn’t seem all that approachable. Maybe we should just wonder if - by some stretch - we can make it a bit more BASIC. Like they did.