Y2K

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to cover the Y2K bug, also known as the millennium bug, or, as they called it in retrospect, the Y2K scare.

Once upon a time, computers were young and the industry didn't consider that software might be used beyond the next few years. Things were changing so fast. Memory was precious, and so was the time it took to commit a date to a database in a transaction. Many of the original big, complex software titles were written in the 1960s and 70s, as mainframes began to be used for everything from controlling subways to coordinating flights. The millennium bug was a result of the year being stored as two digits, dropping the "19" from the year. As the year 2000 got closer and closer, the crazy began, because some systems wouldn't interpret the year "00" properly and, basically, the world would end. All because programmers didn't plan on dates that spanned the century.

And I had a job that had me on a flight to San Francisco so I could be onsite the day after the clock struck 1/1/2000. You know, just in case. The fix was to test the rollover of the year, apply a patch if there was a problem, and then test, test, test. We had feverishly been applying patches for months by Y2K and figured we were good, but you know, just in case, I needed to be on that flight. By then the electric grid, nuclear power plants, flight control, building controls, and anything else you could think of were hooked up to computers. There were computers running practically every business, and a fervor had erupted that the world might end because we didn't know what might crash that morning. I still remember the clock striking midnight. I was at a rave in Los Angeles that night, and it was apparent within minutes that the lights hadn't gone off at the Electric Daisy Carnival. We were all alive.
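To make the failure concrete, here's a minimal sketch, in Python, of how two-digit years break at the rollover. The function names and values are my own illustration, not code from any actual affected system, and the "windowing" fix shown at the end is one of the common remediation techniques of the era, simplified.

```python
# A minimal sketch of the two-digit-year problem: a date stored as "YY"
# behaves fine within the 1900s, then breaks when the century rolls over.

def years_between(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-years calculation on two-digit years,
    as many old systems effectively did."""
    return end_yy - start_yy

# Within the century, the shortcut works: 99 - 70 gives 29 years.
print(years_between(70, 99))   # 29

# At rollover, the year 2000 is stored as 00, and the math goes negative:
print(years_between(70, 0))    # -70, not 30

# A "windowing" fix (one common remediation): pick a pivot and treat
# two-digit years below it as 20xx, the rest as 19xx.
def expand_year(yy: int, pivot: int = 50) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(70))  # 30
```

Windowing only postpones the problem, of course; the real fix the teams applied was patching or replacing software to store four-digit years.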
The reports on the news were just silly headlines to grab attention. Or was it that we had worked our butts off to execute well-planned tactics to analyze, patch, or replace every single piece of software for customers? A lot of both, I suspect. It was my first big project in the world of IT, and I learned a lot from seasoned project managers who ran it like a well-oiled machine.

The first phase was capturing a list of all the software we ran, some 500 applications at the time. The spreadsheet was detailed, including every component in computers, device drivers, and any systems those devices needed to communicate with. By the time it was done there were 5,000 rows and a hundred columns, and we started researching which were Y2K compliant and which weren't. My team had been focused on Microsoft Exchange prior to the big push for Y2K compliance, so we got pulled in to cover mail clients, Office, and then network drivers, since those went quickly. Next thing we knew, we were getting a crash course in Cisco networking.

I can still remember the big Gantt chart that ran the project. While most of my tasks are managed in Jira these days, I still fall back to those in times of need. We had weekly calls tracking our progress, and over the course of a year watched a lot of boxes get checked. I got sent all over the world to "touch" computers in every little office, meeting people who did all kinds of jobs and so used all kinds of software. By the time the final analysis tasks were done, we had a list of patches that each computer needed, and while other projects were delayed, we got them applied or migrated people to other software. It was the first time I saw how disruptive switching productivity software was to people without training. We would cover that topic in a post-mortem after the project wrapped. And it all happened as we watched the local news in each city we visited having a field day with everything from conspiracy theories to doomsday reports.
It was a great time to be young, hungry, and on the road. And we nailed that Gantt chart two months early. We got back to work on our core projects and waited. Then the big day came. The clock struck midnight as I was dancing to what we called techno at the time, and I pulled an all-nighter, making it to the airport just in time for my flight. You could see the apprehension about flying on the faces of the passengers, and you could feel the mood relax when we landed. I took the train into the city and was there when everyone started showing up for work. Their computers all booted up and they got to work. No interruptions. Nothing unexpected. We knew, though. We'd run our simulations. We'd flashed many a BIOS, watched many a patch install, with status bars crawling, waiting to see what kind of mayhem awaited us after a reboot. And we learned the value of preparation, just as the media downplayed the severity, saying it was all a bunch of nothing. They called it a scare. I called it a crisis averted. And an education. So thank you to the excellent project managers for teaching me to be detail oriented. For teaching me to measure twice and cut once. And for teaching me that no project is too big to land ahead of time and on budget. And thank you, listeners, for joining me on this episode of the History of Computing Podcast. I hope you have a great day!
This episode I face my greatest fear: computer bugs. We are going to dive into the origin of the term and examine the origins of debugging. The simple fact is that as soon as computers hit the scene, we started finding bugs, and debugging followed very soon after. That part's not too surprising; it's the specifics that get interesting. Modern debugging methods we still use today were first developed on ENIAC, a machine that's anything but modern.