The only online repository for audio about antiquated technology. Here you can find all the best podcasts covering the history of computers, their evolution, and where old technology lives in the modern day.
Investors have pumped capital into emerging markets since the beginning of civilization. Egyptians explored basic mathematics and used their findings to build larger structures, and even granaries that allowed merchants to store food and serve larger and larger cities. Greek philosophers expanded on those learnings and applied math to chart the orbits of the planets and calculate the size of the moon and the earth. Their merchants used the astrolabe to expand trade routes. They studied engineering and learned how to leverage the six simple machines to automate human effort, developing mills and cranes to construct even larger buildings. The Romans developed modern plumbing and aqueducts and gave us concrete, arches, radiant heating, bound books, and the postal system.
Some of these discoveries were state sponsored; others came from wealthy financiers. Many an early investment went into trade routes, which fueled humanity’s ability to understand the world beyond their little piece of it, improved the flow of knowledge, and mixed found knowledge from culture to culture.
As we covered in the episode on clockworks and the series on science through the ages, many a scientific breakthrough was funded by religion as a means of wowing the people - and then by autocrats and families who’d made their wealth from those trade routes. Over the centuries of civilization we got institutions that could help finance industry.
Banks loan money at an interest rate that matches the risk of their investment. Overcharging on interest has been condemned going back to the Bible - that’s called usury - something the Romans learned during their own cycles of too many goods driving down costs and too few fueling inflation. And yet innovation is an engine of economic growth, and so needs to be nurtured.
The rise of capitalism meant more and more research was done privately and so needed to be funded, along with the rise of intellectual property as a good in its own right. Yet banks have never embraced startups.
The early days of the British Royal Academy were filled with researchers from the elite. They could self-fund their research, and the more people doing research, the more discoveries we made as a society. Early American inventors tinkered in their spare time as well. But the pace of innovation has advanced because of financiers as much as because of hard work and long hours. Companies like DuPont helped fuel the rise of plastics with dedicated research teams. Railroads were built by raising funds. Trade grew. Markets grew. And people like J.P. Morgan knew those markets when they invested in new fields, and were able to grow wealth and inspire new generations of investors. And emerging industries ended up dominating the places that merchants once held in the public financial markets.
Going back to the Venetians, public markets have required regulation. As banking became more of a necessity for scalable societies, it too required regulation - especially after the Great Depression. And yet we needed new companies willing to take risks to keep innovation moving ahead, as we do today. And so the modern venture capital market emerged in those years, with a few people willing to take on the risk of investing in the future.
John Hay “Jock” Whitney was an old money type who also started a firm. We might think of it more as a family office these days, but he had acquired 15% of Technicolor and then went on to get more professional about investing. Jock’s partner in the adventure was a fellow Delta Kappa Epsilon from the University of Texas chapter, Benno Schmidt. Schmidt coined the term venture capital, and they helped pivot Spencer Chemicals from a munitions plant to fertilizer - they’re both nitrates, right? They helped bring us Minute Maid, and more recently have been in and out of Herbalife, Joe’s Crab Shack, Igloo coolers, and many others. But again, it was mostly Whitney money, while we tend to think of venture capital funds as pooling money from more than one investor to fund new and enterprising companies.
And one of those venture capitalists stands out above the rest. Georges Doriot moved to the United States from France to get his MBA from Harvard. He became a professor at Harvard, and his shrewd business mind led to his being tapped as the Director of the Military Planning Division for the Quartermaster General. He would be promoted to brigadier general following a number of massive successes in research and development as part of the pre-World War II military-industrial-academic buildup.
After the war, Doriot created the American Research and Development Corporation, or ARDC, with the former president of MIT, Karl Compton, and engineer-turned-Senator Ralph Flanders - all of them wrote books about finance, banking, and innovation. They proved that R&D for innovation could be capitalized to great return. The best example of their success was Digital Equipment Corporation, which they invested $70,000 in in 1957 and turned into over $350 million when DEC went public in 1968 - netting over 100% a year of return. Unlike Whitney, ARDC took outside money, and so Doriot became known as the first true venture capitalist.
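That “over 100% a year” figure is easy to sanity-check as a compound annual growth rate. Here’s a quick back-of-the-envelope sketch in Python, using the round numbers above:

```python
# Sanity-check ARDC's Digital Equipment return as a compound
# annual growth rate: $70,000 in 1957 to ~$350 million at the 1968 IPO.
initial = 70_000
final = 350_000_000
years = 1968 - 1957  # 11 years

multiple = final / initial          # ~5,000x
cagr = multiple ** (1 / years) - 1  # compound annual growth rate
print(f"{multiple:,.0f}x over {years} years = {cagr:.0%} annualized")
# 5,000x over 11 years = 117% annualized - hence "over 100% a year"
```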
Those post-war years led to a level of patriotism we arguably haven’t seen since. John D. Rockefeller Jr. had inherited a fortune from his father, who built Standard Oil. To oversimplify, that company was broken up into a variety of companies, including what we now think of as Exxon, Mobil, Amoco, and Chevron. But the family was one of the wealthiest in the world, and John Jr.’s five sons built an investment firm they called the Rockefeller Brothers Fund.
We might think of the fund as a social good investment fund these days. Following the war, in 1951, John D. Rockefeller Jr. endowed the fund with $58 million, and in 1956, deep in the Cold War, fund president Nelson Rockefeller financed a study and hired Henry Kissinger to dig into the challenges facing the United States. Then came Sputnik in 1957 and a failed run for the presidency by Nelson in 1960.
Meanwhile, the fund was doing a lot of good, but also helping to research companies Venrock would capitalize. The family had been investing since the 30s, and Laurance Rockefeller had set up Venrock, a mashup of venture and Rockefeller. In Venrock, the five brothers, their sister, MIT’s Ted Walkowicz, and Harper Woodward banded together to sprinkle funding into what is now over 400 companies, including Apple, Intel, PGP, CheckPoint, 3Com, DoubleClick, and the list goes on. Over 125 public companies have come out of the fund, with an unimaginable amount of progress pushing the world forward.
The government was still doing a lot of basic research in those post-war years, which led to standards and patents and pushed innovation forward in private industry. ARDC caught the attention of a number of other people who had money they needed to put to work. Some were family offices increasingly willing to make aggressive investments. Some were started by ARDC alumni, such as Charlie Waite and Bill Elfers, who with Dan Gregory founded Greylock Partners. Greylock has invested in everyone from Red Hat to Staples to LinkedIn to Workday to Palo Alto Networks to Drobo to Facebook to Zipcar to Nextdoor to OpenDNS to Redfin to ServiceNow to Airbnb to Groupon to Tumblr to Zenprise to Dropbox to IFTTT to Instagram to Firebase to Wandera to Sumo Logic to Okta to Arista to Wealthfront to Domo to Lookout to SmartThings to Docker to Medium to GoFundMe to Discord to Houseparty to Roblox to Figma. Going on 800 investments just since the 90s, they are arguably one of the greatest venture capital firms of all time.
Other firms came out of pure security analyst work. Hayden, Stone & Co. was co-founded by another MIT grad, Charles Hayden, who made his name mining copper, betting on an increasingly electrified world. Stone was a Wall Street tycoon, and the two of them founded a firm that employed Joe Kennedy, the family patriarch, and Frank Zarb, a Chairman of the NASDAQ. It also gave us one of the great venture capitalists to fund technology companies, Arthur Rock.
Rock has often been portrayed as the bad guy in Steve Jobs movies, but he was the one who helped the “Traitorous 8” leave Shockley Semiconductor. After Eugene Kleiner’s father (who had an account at Hayden, Stone) mentioned they needed funding, Rock got serial entrepreneur Sherman Fairchild to fund Fairchild Semiconductor. Fairchild had developed technology for the Apollo missions, camera flashes, and spy satellite photography - and that semiconductor business grew to 12,000 people and became a bedrock of what we now call Silicon Valley. Rock ended up moving to the area and investing, parlaying his success with Fairchild into an investment in Intel when Moore and Noyce left Fairchild to co-found it.
Venture capital firms raise money from institutional investors that we call limited partners and invest that money on their behalf. After moving to San Francisco, Rock set up Davis and Rock, got some limited partners, including friends from his time at Harvard, and invested in 15 companies, including Teledyne and Scientific Data Systems, which got acquired by Xerox - taking their $257,000 investment to a $4.6 million valuation in 1970 and landing him on the board of Xerox. He dialed for dollars for Intel, raised another $2.5 million in a couple of hours, and became the first chair of their board. He made all of his LPs a lot of money.
One of those Intel employees who became a millionaire retired young: Mike Markkula. He invested some of his own money in Apple, and Rock put in $57,000, growing it to $14 million. Markkula went on to launch or invest in companies and make billions of dollars in the process.
Another firm that came out of the Fairchild Semiconductor days was Kleiner Perkins. It started in 1972, founded by partners Eugene Kleiner, Tom Perkins, Frank Caufield, and Brook Byers. Kleiner was one of those Traitorous 8 who left William Shockley and founded Fairchild Semiconductor. He later hooked up with the former head of Research and Development at HP, yet another MIT and Harvard grad, Tom Perkins. Perkins would help Corning, Philips, Compaq, and Genentech - serving on boards and helping them grow.
Caufield came out of West Point and got his MBA from Harvard as well. He’d go on to work with Quantum, AOL, Wyse, Verifone, Time Warner, and others. Byers came to the firm shortly after getting his MBA from Stanford and started four biotech companies that were incubated at Kleiner Perkins - netting the firm over $8 billion. And they taught future generations of venture capitalists - people like John Doerr, a great salesman at Intel who by 1980 had graduated into venture capital, bringing in deals with Sun, Netscape, Amazon, Intuit, Macromedia, and one of the best gambles of all time: Google. His reward is a net worth of over $11 billion. But more importantly, he helped drive innovation and shape the world we live in today.
Kleiner Perkins was the first to move onto Sand Hill Road. They’ve since invested in nearly a thousand companies that include pretty much every household name in technology. From there, we got the rise of the dot-coms and sky-high rent, on par with Manhattan. Why? Because dozens of venture capital firms opened offices on that road, including Lightspeed, Highland, Blackstone, Accel-KKR, Silver Lake, Redpoint, Sequoia, and Andreessen Horowitz.
Sequoia also started in the 70s, founded by Don Valentine, with Doug Leone and Michael Moritz taking over in the 90s. Valentine did sales for Raytheon before joining National Semiconductor, which had been founded by a few Sperry Rand traitors and brought in some execs from Fairchild. They were venture backed, and his background in sales helped propel some of Sequoia’s earlier investments in Apple, Atari, Electronic Arts, LSI, Cisco, and Oracle to success. And that allowed them to invest in a thousand other companies, including Yahoo!, PayPal, GitHub, Nvidia, Instagram, Google, YouTube, Zoom, and many others.
So far, most of the firms have been in the US. But venture capital is a global trend.
Masayoshi Son founded SoftBank in 1981 to sell software, then published some magazines and grew the circulation to the point that they were Japan’s largest technology publisher by the end of the 80s, going public in 1994. They bought Ziff Davis publishing and COMDEX, and seeing so much technology and the money in technology, Son inked a deal with Yahoo! to create Yahoo! Japan. They pumped $20 million into Alibaba in 2000, and by 2014 that investment was worth $60 billion. In that time they became more aggressive with where they put their money to work. They bought Vodafone Japan, took over competitors, and then the big one: they bought Sprint, which they merged with T-Mobile and now own a quarter of the combined company. An important aspect of venture capital and private equity is multiple expansion - buying at one valuation multiple and having the market apply a higher multiple later. The market capitalization of Sprint more than doubled, with shares shooting up over 10%. They also bought Arm Limited, the semiconductor company that designs the chips in so many modern phones, IoT devices, tablets, and even computers now.
As with other financial firms, not every investment goes great. SoftBank pumped nearly $5 billion into WeWork. Wag failed. 2020 saw staff reductions, and they had to sell tens of billions in assets to weather the pandemic. And yet even with some high-profile losses, they sold Arm for a huge profit, Coupang went public, and investors in their Vision Funds are seeing phenomenal returns across the over 200 companies in the portfolios.
Most of the venture capitalists we mentioned so far invested as early as possible and stuck with the company until an exit - be it an IPO, acquisition, or even a move into private equity. Most got a seat on the board in exchange for not only their seed capital, or the money to take products to market, but also their advice. In many a company the advice was worth more than the funding. For example, Randy Komisar, now at Kleiner Perkins, famously recommended TiVo sell monthly subscriptions, the growth hack they needed to get profitable.
As the venture capital industry grew and more and more money was pumped into fueling innovation, different accredited and institutional investors emerged with different tolerances for risk and different skills to bring to the table. Someone who built an enterprise SaaS company and sold it within three years might be best suited to invest in and advise another company doing the same thing, just as someone who had spent 20 years running later-stage companies and taking them to IPO was better at advising later-stage startups that maybe weren’t startups any more.
Here’s a fairly common startup story. After finishing a book on Lisp, Paul Graham decides to found a company with Robert Morris. That was Viaweb in 1995 and one of the earliest SaaS startups that hosted online stores - similar to a Shopify today. Viaweb had an investor named Julian Weber, who invested $10,000 in exchange for 10% of the company. Weber gave them invaluable advice and they were acquired by Yahoo! for about $50 million in stock in 1998, becoming the Yahoo Store.
Here’s where the story gets different. In 2005, Graham decided to start doing seed funding for startups, following the model that Weber had established with Viaweb. He and Viaweb co-founders Robert Morris (the guy that wrote the Morris worm) and Trevor Blackwell started Y Combinator, along with Jessica Livingston. They put in $200,000 to invest in companies, and with successful investments grew to a few dozen companies a year. They’re different because they pick a lot of technical founders (like themselves) and help the founders find product-market fit, finish their solutions, and launch. Doing so helped them bring us Airbnb, DoorDash, Reddit, Stripe, Dropbox, and countless others.
Notice that many of these firms have funded the same companies. This is because multiple funds investing in the same company helps distribute risk. But also because in an era where we’ve put everything from cars to education to healthcare to innovation on an assembly line, we have an assembly line in companies. We have thousands of angel investors, or humans who put capital to work by investing in companies they find through friends, family, and now portals that connect angels with companies.
We also have incubators, a trend that began in the late 50s in New York when Joe Mancuso opened a warehouse up to small tenants after buying it to help the town of Batavia. The Batavia Industrial Center provided office supplies, equipment, secretaries, a line of credit, and most importantly advice on building a business. Mancuso had made plenty of money on chicken coops and thought that maybe helping companies start was a lot like incubating chickens - and so incubators were born.
Others started incubating. The concept expanded from local entrepreneurs helping other entrepreneurs, and now cities, think tanks, companies, and even universities offer incubation within their walls. Keep in mind many a university owns a lot of patents developed there, and plenty of companies have sprung up to commercialize the intellectual property incubated there. Seeing that, and how technology companies needed to move faster, we got accelerators like Techstars, founded by David Cohen, Brad Feld, David Brown, and Jared Polis in 2006 out of Boulder, Colorado. They have worked with over 2,500 companies and run a couple of dozen programs. Some of the companies fail by the end of their cohort, and yet many, like Outreach and SendGrid, grow into great organizations or get acquired.
The line between incubator and accelerator can be pretty slim today. Many of the earlier companies mentioned are now the more mature venture capital firms, and many have moved their focus to later-stage companies, with YC and Techstars investing earlier. They attend the demos of companies being accelerated and invest. And given that founding companies and innovating is now on an assembly line, the firms that invest in an A round of funding, which might come after an accelerator, will look to exit in a B round, C round, etc. Or they may elect to stay at risk all the way to an acquisition or IPO.
And we have a bevy of investing companies focused on the much later stages. We have private equity firms and family offices that look to outright own, expand, and either harvest dividends from or sell an asset, or company. We have traditional institutional lenders who provide capital but also invest in companies. We have hedge funds who hedge with puts and calls and other derivatives on a variety of asset classes. Each has their sweet spot, even if most will opportunistically invest in diverse assets.
Think of the investments made as horizons. The angel investor might have their shares acquired in later rounds in order to clean up the cap table - the record of who owns which parts of a company. This simplifies the shareholder structure as the company takes on larger institutional investors to sprint towards an IPO or an acquisition.
People like Arthur Rock, Tommy Davis, Tom Perkins, Eugene Kleiner, Doerr, Masayoshi Son, and so many others have proven that they could pick winners. Or did they prove they could help build winners? Let’s remember that investing knowledge and operating experience were as valuable as their capital, especially when the investments were adjacent to other successes they’d found.
Venture capitalists invested more than $10 billion in 1997. $600 million of that found its way to early-stage startups. But most went to preparing a startup with a product to take it to mass market. Today we pump more money than ever into R&D - and our tax systems support doing so more than ever. And so more than ever, venture money plays a critical role in the life cycle of innovation. Or does venture money play a critical role in the commercialization of innovation? Seed accelerators, startup studios, venture builders, public incubators, venture capital firms, hedge funds, banks - they’d all have a different answer. And they should. Few would stick with an investment like Digital Equipment for as long as ARDC did. And yet few provide over 100% annualized returns like they did.
As we said in the beginning of this episode, wealthy patrons - from Pharaohs to governments to industrialists to now venture capitalists - have long helped to propel innovation, technology, trade, and intellectual property. We often focus on the technology itself in computing, but without the money, the innovation either wouldn’t have been developed or, if developed, wouldn’t have made it to the mass market - and so wouldn’t have had an impact on our productivity or quality of life.
The knowledge that comes with those who provide the money can be met with irreverence. Taking an innovation to market means market-ing. And sales. Most generations see the previous generations as almost comedic, as we can see in the HBO show Silicon Valley when the cookie-cutter industrialized approach goes too far. We can also end up with founders who learn to sell to investors rather than raising capital the best way possible: by selling to paying customers.
But there’s wisdom from previous generations when offered and taken appropriately. A coachable founder with a vision that matches the coaching and a great product that can scale is the best investment that can be made. Because that’s where innovation can change the world.
Jim Leiterman, Atari Research Group
James and John discuss eBay Finds: PowerBook 180 with rare case, Apple jewelry set, and framed Macintosh SE logic board. They are joined by Paul Hagstrom who previews KansasFest, and news includes Woz on Cameo, Apple hardware calendar, early portrait Macintosh, and new Throwboy blankets.
Brenda Laurel, Atari Research
Dr. Brenda Laurel worked at Atari from 1980 through 1984. She began as a software specialist for educational applications, then soon became manager of software strategy for the home computer division. In mid-1982, she joined Atari Corporate Research at the Sunnyvale research laboratory, where she worked with Alan Kay.
After Atari, she worked at Activision as director of software development. Later she founded Purple Moon, a software company focused on creating games for young girls, and co-founded Telepresence Research, a company focused on first-person media and virtual reality.
This interview took place on July 15, 2021. Check the show notes for links to articles she wrote for Atari Connection magazine; her doctoral dissertation, "Toward the Design of a Computer-Based Interactive Fantasy System"; scans of memos on the subject of interactive fantasy that she wrote while at Atari Research; and more.
Brenda's web site
Steven Levy on why Macintosh developers aren’t scared of Claris, the software company backed by Apple Computer.
Original text from Macworld Magazine, June 1992.
ClarisWorks and other seemingly Macintosh-only products did indeed ship on Windows.
Author Albert Cory joins the podcast in this episode to talk about his new book, Inventing the Future. Inventing the Future was a breath of fresh air from an inspirational time and person. Other books have told the story of how the big names in computing were able to commercialize many of the innovations that came out of Xerox PARC. But Inventing the Future adds a really personal layer that ties in the culture of the day (music, food, geography, and even interpersonal relationships) to what was happening in computing - that within a couple of decades would wildly change how we live our lives.
We’re lucky he made the time to discuss his take on a big evolution in modern technology through the lens of historical fiction. I would absolutely recommend the book to academics and geeks and just anyone looking to expand their minds. And we look forward to having him on again!
Roy Allen opened his first root beer stand in 1919, in Lodi, California. He’d bought a recipe for root beer and boy, it sure was a hit. He brought in people to help. One was Frank Wright, who became a partner in the endeavor; they changed the name to A&W Root Beer, for the initials of their last names, and opened a restaurant in 1923 in Sacramento, California. Allen bought Wright out in 1925, but kept the name. Having paid for the root beer license himself, he decided to franchise out the use of it - but let’s not call that the first fast food chain just yet. After all, it was just a license to make root beer, just like he’d bought the recipe all those years ago.
A&W’s Allen sold the company in 1950 to retire, and the franchise agreements moved from a cash payment to royalties. After Allen, ownership of the company bounced around until it landed with United Fruit, which would become United Brands. They took A&W to the masses, and the root beer company was split from the restaurant chain - the chain eventually owned by Yum! Brands, now with nearly 1,000 locations and over $300M in revenues.
As A&W franchised, some experimented with other franchising options or with not going that route at all. Around the same time Allen opened his first stand, Walt Anderson was running a few food stands around Wichita. He met up with Billy Ingram, and in 1921 they opened the first White Castle, putting in $700 of their own money. By 1927 they had expanded out to Indianapolis. As is often the case, the original cook with the concept sold out his part of the business in 1933, when they moved their headquarters to Columbus, Ohio, and the Ingram family expanded all over the United States. Many a fast food chain is franchised, but White Castle has stayed family owned and operates profitably without taking on debt to grow.
Kentucky Fried Chicken
KFC is fried chicken. They sell some other stuff, I guess. They were started by Harland Sanders in 1930, but as we see with a lot of these, they didn’t start franchising until after the war. His big hack was to realize he needed to cook chicken faster to serve more customers, so he converted a pressure cooker into a pressure fryer, completely revolutionizing how food is fried.
He perfected his original recipe in 1940 and by 1952 was able to parlay his early success into franchising out what is now the second largest fast food chain in the world. But the largest is McDonald’s.
1940 comes around and Richard and Maurice McDonald open a little restaurant called McDonald’s. It was a drive-up barbecue joint in San Bernardino. But drive-in restaurants were getting competitive, and looking back at the business, they realized that four fifths of their sales were hamburgers. So they shut down for a bit, got rid of the carhops that were popular at the time, simplified the menu, and trimmed out everything they could - getting down to fewer than 10 items on the menu.
They were able to get prices down to 15-cent hamburgers using something they called the Speedee Service System - an assembly line of food preparation that became the standard in the fast food industry over the next few decades. They also looked at industrial equipment and used that to add french fries and shakes, which finally unlocked an explosion of sales, and profits doubled.
But then a milkshake mixer salesman paid a visit to San Bernardino to see why the brothers needed 8 of his mixers and was amazed to find they were, in fact, cranking out 48 shakes at a time with them. The assembly line opened his eyes, and he bought the rights to franchise the McDonald’s concept, opening his first location in Des Plaines, Illinois. One of the best growth hacks for any company is just to have an amazing sales and marketing arm. OK, so not a hack, but just good business. And Ray Kroc will go down as one of the greatest. From those humble beginnings selling milkshake mixers, he moved from licensing to buying the company outright for $2.7 million in 1961.
Another growth hack was to realize, thanks to a former VP at Tastee-Freez, that owning the real estate brought yet another revenue stream. A low deposit and a 20% or higher markup on the monthly rent would grow into a nearly $38 billion revenue stream.
The highway system was paying dividends to the economy. People were moving out to the suburbs. Cars were shipping in the highest volumes ever. McDonald’s added the Filet-O-Fish and was exploding internationally in the 60s and 70s, and now sits on over 39,000 stores, with about a $175 billion market cap and over $5 billion in revenue.
Diners, Drive-ins, and Dives
Those post-war years were good to fast food. Anyone that’s been to a 50s themed restaurant can see the car culture on display and drive-ins were certainly a part of that. People were living their lives at a new pace to match the speed of those cars and it was a golden age of growth in the United States. The computer industry was growing right along with those diners, drive-ins, and dives.
One company that started before World War II and grew fast was Dairy Queen, started in 1940 by John Fremont McCullough. He’d invented soft-serve ice cream in 1938 and opened the first Dairy Queen in Joliet, Illinois with his friend Sherb Noble, who’d been selling his soft-serve ice cream out of his shop for a couple of years. During those post-war 1950s explosive years they introduced the Dilly Bar and have now expanded to 6,800 locations around the world.
William Rosenberg opened a little coffee shop in Quincy, Massachusetts. As with the others in this story, he parlayed quick success and started to sell franchises in 1955, and Dunkin’ Donuts has grown to 12,400 locations.
In-N-Out Burger started in 1948 as well, founded by Harry and Esther Snyder, and while they’ve only expanded around the west coast of the US, they’ve grown to around 350 locations and stayed family owned.
Pizza Hut was started in 1958 in Wichita, Kansas. While it was more of a restaurant for a long time, it’s now owned by Yum! Brands and operates well over 18,000 locations. Yum! also owns KFC and Taco Bell. Glen Bell served as a cook in World War II and moved to San Bernardino to open a drive-in hot dog stand in 1948. He sold it and started a taco stand, selling tacos for 19 cents apiece, expanding to three locations by 1955, and went serial entrepreneur - selling those locations and opening four new ones he called El Tacos down in Long Beach. He sold that to his partner in 1962 and started his first Taco Bell, finally ready to start selling franchises in 1964, and grew it to 100 restaurants by 1967.
They took Taco Bell public in 1970, when they had 325 locations. And Pepsi bought the 868-location chain in 1978 for $125 million in stock, eventually spinning the food business off into what is now called Yum! Brands and co-branding with cousin restaurants in that portfolio - Pizza Hut and Long John Silver’s. I haven’t been to a Long John Silver’s since I was a kid, but they still have over a thousand locations and date back to a hamburger stand started in 1929 that pivoted to a roast beef sandwich shop and then pivoted many more times until landing on the fish and chips concept in 1969.
The Impact of Computing
It’s hard to imagine that any of these companies could have grown the way they did without more than an assembly line of human automation. Mechanical cash registers had been around in the United States since shortly after the Civil War, with James Ritty filing early patents in 1883. Arguably the abacus and counting frame go back way further, but the Ritty Model I patent sparked the interest of Jacob Eckert, who bought the patent, added some features, and took on $10,000 in debt to take the cash register to market, forming National Manufacturing Company. That became National Cash Register, still a more than $6 billion market cap company.
But the growth of IBM and other computing companies, the release of semiconductors, and the miniaturization and dropping costs of printed circuit boards helped lead to the advent of electronic cash registers. After all, those are just purpose-built computers. IBM introduced the first point of sale system in 1973, bringing the cash register into the digital age. Suddenly a cash register could sit up front as a simplified terminal, sending printouts or information to a screen in the back.
Those IBM 3650s evolved into the first use of peer-to-peer client-server technology and ended up in Dillard’s in 1974. That same year, McDonald’s had William Brobeck and Associates develop a microprocessor-based terminal. It was based on the Intel 8008 chip and used a simple push-button device to allow cashiers to enter orders. This gave us a queue of orders being sent by terminals in the front. And we got touchscreen registers in 1986, running on the Atari 520ST, with IBM later introducing a 486-based system running on FlexOS.
As we moved into the 90s, fast food chains were spreading fast and the way we paid for goods was starting to change. All these electronic registers could suddenly send the amount owed over an electronic link to a credit card processing machine.
John Biggins launched the Charg-It card in 1946, and the idea spread to Franklin National Bank a few years later. Diners Club picked up on the trend and launched the Diners Club Card in 1950, growing to 20,000 cardholders in 1951. American Express came along in 1958 with their card and in just five years grew to a million cards. Bank of America also released their BankAmericard in 1958, which became the first general-purpose credit card. They started in California and went national over the next ten years. That would evolve into Visa, and 1966 brought us MasterCard as well. That’s also the year the Barclaycard brought credit cards outside the US for the first time, showing up first in England. Then came Carte Bleue in ’67 in France, and the Eurocard, a collaboration between the Wallenberg family and Interbank, in 1968 to serve the rest of Europe.
Those spread and by the 90s we had enough people using them to reach a critical mass where fast food needed to take them as well. Whataburger and Carl’s Jr added the option in 1989, Arby’s in 1990, and while slower to adopt taking cards, McDonald’s finally did so in 2002. We were well on our way to becoming a cashless society.
And the rise of the PC led to POS systems moving a little down-market, with systems like Aloha, designed in 1998 (now owned by NCR), lots of other brands of devices, and home-brewed tooling from large vendors.
And computers helped revolutionize the entire organization. Chains could automate supply lines to stores with computerized supply chain management. Desktop computers also led to back-office management functions being computerized, like scheduling and time clocks, so fewer managers were needed. That was happening all over post-war America by the 90s.
In that era after World War II, people were fascinated with having the same experiences over and over - and having them be identical. Think about it: before the war, life was slower and every meal required work. After, life was fast and the food always came out hot, and it felt like a suburban life wherever you were. Even as white flight was destroying city centers and that homogeneity was leading to further centralized organizations dividing communities.
People flocked to open these restaurants. They could make money, it was easier to get a loan to open a store with a known brand, there were high profit margins, and in a lot of cases, there was a higher chance of success than many other industries. This leads to even more homogeneity. That rang true for other types of franchising on the rise as well. Fast food became a harbinger of things to come and indicative of other business trends as well.
These days we think of high fructose corn syrup, fried food, and GMOs when we think of fast food - and those certainly contributed to the rise, because people who eat fast food want that. Following the first wave of fast food we got other brands rising as well: Arby’s was founded in 1964, Subway in 1965, Wendy’s in 1969, Jack in the Box in 1961, and Chick-fil-A in 1946, just a few miles from where I was born. And newer chains like Quiznos in 1981, Jimmy John’s in 1983, and Chipotle in 1993. These touch other areas of the market, focusing on hotter, faster, or spicier.
From the burger craze to the drive-in craze to just plain fast, fast food has been with us since long before anyone listening to this episode was born and is likely to continue on long after we’re gone. Love it or hate it, it’s a common go-to when we’re working on systems - especially far from home.
And the industry continues to evolve. A barrier to opening any type of retail chain was once the point of sale system. Another was finding a way to accept credit cards. Stripe emerged to help with the credit cards, and a cadre of app-based solutions for the iPhone, Android, and tablets emerged to make taking credit cards simple for new businesses. A lot of the development was once put into upmarket solutions, but these days downmarket is so much more approachable. And various fraud-prevention machine learning algorithms and chip-and-PIN technologies make taking a credit card for a simple transaction safer than ever.
Fast food, and retail in general, continues to evolve. The next evolution seems to be self-service. This is well underway, with a number of companies looking at kiosks to take orders, and all those cashiers might find RFID tags to be another threat to their jobs. If a machine can see what’s in a cart on the way out of a store, there’s no need for cashiers. Digitization was one wave of technology, but given the low cost of labor, we are only now seeing the cost of the technology come down to where it’s cheaper than people - much as the cost of clockworks and then industrialization caused first the displacement of Roman slave labor and then of workers in factories. Been to a parking ramp recently? That’s a controlled enough environment that its workers were some of the first to be replaced with simple computers, which processed first magnetic stripes and now license plates using simple character recognition technology.
Another revolution that has already begun is how we get the food. Grubhub launched in 2004, we got Postmates in 2011, and DoorDash came in 2013 to make it so we don’t even have to leave the house to get our burger fix. We can just open an app, use our fingerprint to check out, and have items show up at our homes, often in less time than if we’d gone to pick them up. And given that they have a lot of drivers and know exactly where they are, Uber attempted to merge with DoorDash in 2019 - but that’s fine, because they’d already launched Uber Eats in 2014. DoorDash has about half that market, at $2.9 billion in revenues for 2020, and that’s with just 18 million users - still less than 10% of US households. I guess that’s why DoorDash enjoys a nearly $60 billion market cap. We are in an era of technology empires.
And yet McDonald’s is only worth about three times what DoorDash is worth and guess which one is growing faster.
Empires come and go.
The inability to manage an empire that scales beyond what its technology and communications capabilities allow was the downfall of many an empire - from Rome to Poland to the Russian Czarist empire. Each was profoundly changed: by splitting up, as with Rome; by becoming a pawn between neighboring empires; or by the development of an entirely new system of governance, as with Russia. Fast food employs four and a half million people in the US today, with almost another 10 million people employed globally. About half of those are adults. The industry has grown from revenues of just $6 billion in 1970 to half a trillion dollars today. And those employees often make minimum wage. Think about this: that’s over twice the number of slaves as there were in the Roman Empire - many of whom rose up against the empire.
And the name of the game is automation. It has been since that McDonald’s Speedee Service System that enthralled Ray Kroc. But the human labor will some day soon be drastically cut, just as the McDonald brothers cut carhops from their roster all those years ago. And that domino will knock down others in every establishment we walk into to pay for goods. Probably not in the next 5 years, but certainly in my lifetime. Job displacement due to technology is nothing new - it goes back past the Romans. But it is accelerating faster than at other points in history. And you have to wonder what kinds of social, political, and economic repercussions we’ll have. Add in other changes around the world, and the next few decades will be interesting to watch.
The Internet is not a simple story to tell. In fact, every sentence here is worthy of an episode if not a few.
Many would claim the Internet began back in 1969 when the first node of the ARPAnet went online. That was the year we got the first color pictures of Earth from Apollo 10 and the year Nixon announced the US was leaving Vietnam. It was also the year of Stonewall, the moon landing, the Manson murders, and Woodstock. A lot was about to change.
But maybe the story of the Internet starts before that, when the basic research to network computers began as a means of networking nuclear missile sites with fault-tolerant connections in the event of, well, nuclear war. Or the Internet began when a T3 backbone was built to host all the datas. Or the Internet began with the telegraph, when the first data was sent over electronic current. Or maybe the Internet began when the Chinese used fires to send messages across the Great Wall of China. Or maybe the Internet began when drums sent messages over long distances in ancient Africa, like early forms of packets flowing over Wi-Fi-esque sound waves.
We need to make complex stories simpler in order to teach them, so if the first node of the ARPAnet in 1969 is where this journey should end, feel free to stop here. To dig in a little deeper, though, that ARPAnet was just one of many networks that would merge into an interconnected network of networks. We had dial-up providers like CompuServe, America Online, and even The WELL. We had regional timesharing networks like DTSS out of Dartmouth and PLATO out of the University of Illinois at Urbana-Champaign. We had corporate time sharing networks and systems. Each competed or coexisted or took time from others or pushed more people to others through their evolutions. Many used their own custom protocols for connectivity. But most were walled gardens, unable to communicate with the others.
So if the story is more complicated than the ARPAnet simply being the ancestor of the Internet, why is that the story we hear? Let’s start that journey with a memo we did an episode on, called “Memorandum For Members and Affiliates of the Intergalactic Computer Network,” sent by JCR Licklider in 1963, which can be considered the allspark that lit the bonfire called the ARPANet. Which isn’t exactly the Internet, but isn’t not. In that memo, Lick proposed a network of computers available to the research scientists of the early 60s - scientists from computing centers that would evolve into supercomputing centers, and then a network open to the world, even our phones, televisions, and watches.
It took a few years, but eventually ARPA brought in Larry Roberts, and by late 1968 ARPA awarded the contract from its RFQ to build the network to a company called Bolt Beranek and Newman (BBN), who would build the Interface Message Processors, or IMPs. The IMPs were computers that connected a number of sites and routed traffic. The first IMP, which might be thought of more as a network interface card today, went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET.
The first node of ARPAnet went online at the University of California, Los Angeles (UCLA for short). The network grew as leased lines and more IMPs became available. As it grew, the early computer scientists realized that each site had different computers running various and random stacks of applications and different operating systems, so they needed to standardize certain aspects of connectivity between different computers.
Given that UCLA was the first site to come online, Steve Crocker from there began organizing notes about protocols and how systems connected with one another, in what they called RFCs, or Requests for Comments. That series of notes was then managed by a team that included Elizabeth (Jake) Feinler once Doug Engelbart’s project on the “Augmentation of Human Intellect” at Stanford Research Institute (SRI) became the second node to go online. SRI developed the Network Information Center, where Feinler maintained a list of host names (which evolved into the hosts file) and a list of address mappings that would later evolve into the functions of InterNIC, which was turned over to the US Department of Commerce when the number of devices connected to the Internet exploded. Feinler and Jon Postel from UCLA maintained those, though, until his death 28 years later. The RFCs cover everything from opening terminal connections into machines to file sharing to addressing - and now nearly any place where networking needs a standard.
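That centrally maintained list of host names lives on in the hosts file that ships with virtually every operating system today. Here’s a minimal illustrative example of the format (the non-localhost entries are hypothetical):

```
# /etc/hosts - static name-to-address mappings, a direct
# descendant of the NIC's centrally maintained host list
127.0.0.1   localhost
10.0.0.5    build-server build   # hypothetical internal host and alias
```

Before DNS, getting a new machine known to the network meant getting it added to a file like this, which everyone then had to fetch.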
The development of many of those early protocols that made computers useful over a network was also funded by ARPA. They funded a number of projects to build tools that enabled the sharing of data, like file sharing, and some advancements were loosely connected, with people just doing things to make them useful - so by 1971 we also had email. But all those protocols needed to flow over a common form of connectivity that was scalable. Leonard Kleinrock, Paul Baran, and Donald Davies were independently investigating packet switching, and Roberts brought Kleinrock into the project, as he was at UCLA. Bob Kahn entered the picture in 1972. He would team up with Vint Cerf from Stanford, who came up with encapsulation, and together they would define the protocol that underlies the Internet, TCP/IP. By 1974 Cerf and Kahn wrote RFC 675, where they coined the term internet as shorthand for internetwork. The number of RFCs was exploding, as was the number of nodes: the University of California, Santa Barbara came online, then the University of Utah, to connect Ivan Sutherland’s work. The network was national when BBN connected to it in 1970. Now there were 13 IMPs; by 1971, 18; then 29 in ’72 and 40 in ’73. Once the need arose, Kleinrock would go on to work with Farouk Kamoun to develop hierarchical routing theories in the late 70s.
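Encapsulation, the idea Cerf contributed, is simply each layer wrapping the data from the layer above with its own header. Here’s a toy sketch of the nesting - not the real binary TCP and IP header layouts from the RFCs, just the concept:

```python
# Toy illustration of encapsulation: each layer wraps the payload
# from the layer above with its own header. Real TCP/IP headers are
# binary structures defined in the RFCs; these strings are stand-ins.

def tcp_segment(data: bytes, src_port: int, dst_port: int) -> bytes:
    return f"TCP {src_port}->{dst_port}|".encode() + data

def ip_packet(segment: bytes, src: str, dst: str) -> bytes:
    return f"IP {src}->{dst}|".encode() + segment

# An application message is wrapped twice on its way onto the wire.
message = b"hello, internetwork"
packet = ip_packet(tcp_segment(message, 50000, 80), "10.0.0.1", "10.0.0.2")
print(packet)
# b'IP 10.0.0.1->10.0.0.2|TCP 50000->80|hello, internetwork'
```

A router in the middle only has to read the outer IP header; what’s inside is none of its business - which is exactly what let so many different networks interconnect.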
In 1972, ARPA became DARPA. The network grew to 213 hosts in 1981; by 1982, TCP/IP became the standard for the US DOD, and in 1983, ARPANET moved fully over to TCP/IP. And so TCP/IP, or Transmission Control Protocol/Internet Protocol, became the most dominant networking protocol on the planet. It was written to help improve performance on the ARPAnet, with the ingenious idea to encapsulate traffic. But in the 80s it was still just for researchers. That is, until NSFNET was launched by the National Science Foundation in 1986.
And it was international, with University College London connecting in 1973 - which would go on to inspire a British research network called JANET that built its own set of protocols, the Colored Book protocols. The Norwegian Seismic Array connected over satellite that same year. So networks were forming all over the place, often just time sharing networks where people dialed into a single computer.
Another networking project going on at the time, also getting funding from ARPA as well as the Air Force, was PLATO. Out of the University of Illinois, it was meant for teaching and began on a mainframe in 1960. But by the time ARPAnet was growing, PLATO was on version IV and running on a CDC Cyber. The time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over phone lines, making it another early network. In fact, there were multiple CDC Cybers that could communicate with one another. And many on ARPAnet also used PLATO, cross-pollinating the defense-backed network with non-defense academia across a number of academic institutions.
The defense backing couldn’t last forever. The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Bob Taylor split to go work at Xerox, where he was able to pick the best of the scientists he’d helped fund at ARPA. He helped bring in people from the Stanford Research Institute, where they had been working on the oN-Line System, or NLS, and people like Bob Metcalfe, who brought us Ethernet and better collision detection. Metcalfe would go on to found 3Com, a great switch and network interface company during the rise of the Internet.
But there were plenty of people who could see the productivity gains from ARPAnet and didn’t want it to disappear. And the National Science Foundation (NSF) was flush with cash. And the ARPA crew was increasingly aware of non-defense-oriented use of the system. So the NSF started up a little project called CSNET in 1981, so the growing number of supercomputers could be shared between all the research universities. It was free for universities that could get connected, and from 1985 to 1993 NSFNET surged from 2,000 users to 2,000,000 users. Paul Mockapetris made the Internet easier to use than when it was an academic-only network by developing the Domain Name System, or DNS, in 1983. That’s how we can call up remote computers by names rather than IP addresses. And of course DNS was yet another of the protocols in Jon Postel’s list of protocol standards at UCLA, which by 1986, after the selection of TCP/IP for NSFnet, would be maintained by the standardization body known as the IETF, or Internet Engineering Task Force. Maintaining a set of protocols that all vendors needed to work with was one of the best growth hacks ever. No vendor could have kept up with demand through a 1,000x growth in such a small number of years.
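And that name-to-address mapping is still a one-liner away. A minimal sketch using Python’s standard library (the hostname is just a placeholder):

```python
import socket

# Ask the local DNS resolver to turn a name into an IPv4 address -
# the same mapping the NIC's hosts file once provided by hand.
hostname = "example.com"  # placeholder hostname
print(f"{hostname} resolves to {socket.gethostbyname(hostname)}")
```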
NSFNet started with six nodes in 1985, connected by LSI-11 Fuzzball routers, and quickly outgrew that backbone. They put it out to bid, and Merit Network won out in a partnership between MCI, the State of Michigan, and IBM. Merit had begun before the first ARPAnet connections went online, as a collaborative effort by Michigan State University, Wayne State University, and the University of Michigan. They’d been connecting their own machines since 1971, had implemented TCP/IP, and bridged to ARPANET. The money was getting bigger: they got $39 million from NSF to build what would emerge as the commercial Internet.
They launched in 1987 with 13 sites over 14 lines. By 1988 they’d gone nationwide going from a 56k backbone to a T1 and then 14 T1s. But the growth was too fast for even that. They re-engineered and by 1990 planned to add T3 lines running in parallel with the T1s for a time. By 1991 there were 16 backbones with traffic and users growing by an astounding 20% per month.
Vint Cerf ended up at MCI, where he helped lobby for the privatization of the internet and helped found the Internet Society in 1988. The lobbying worked and led to the Scientific and Advanced-Technology Act in 1992. Before that, use of NSFNET was supposed to be for research; now it could expand to non-research and education uses. This allowed NSF to bring on even more nodes. And so by 1993 it was clear that this was growing beyond what a governmental institution whose charge was science could justify as “research” any longer.
By 1994, Vint Cerf was designing the architecture and building the teams that would build the commercial internet backbone at MCI. And so NSFNET began the process of unloading the backbone and helped the world develop the commercial Internet by sprinkling a little money and know-how throughout the telecommunications industry, which was about to explode. NSFNET went offline in 1995, but by then there were networks in England, South Korea, Japan, and Africa, and CERN was connected to NSFNET over TCP/IP. And Cisco was selling routers that would fuel an explosion internationally. There was a war of standards, and yet over time we settled on TCP/IP as THE standard.
And those were just some of the nets. The Internet is really not just NSFNET or ARPANET but a combination of a lot of nets. At the time there were a lot of time sharing computers that people could dial into and following the release of the Altair, there was a rapidly growing personal computer market with modems becoming more and more approachable towards the end of the 1970s. You see, we talked about these larger networks but not hardware.
The first modulator-demodulator, or modem, was the Bell 101 dataset, which had been invented all the way back in 1958, loosely based on a previous model developed to manage SAGE computers. But the transfer rate, or baud, had stopped being improved upon at 300 for almost 20 years, and not much had changed. That is, until Hayes Microcomputer Products released a modem designed to run on the Altair 8800 S-100 bus in 1978. Personal computers could talk to one another.
And one of those Altair owners was Ward Christensen, who met Randy Suess at the Chicago Area Computer Hobbyists’ Exchange, and the two of them had this weird idea: have one of their computers host a bulletin board. People could dial into it and discuss their Altair computers when it snowed too much to meet in person for their club. They started writing a little code, and before you know it we had a tool they called Computerized Bulletin Board System software, or CBBS. The software, and more importantly the idea of a BBS, spread like wildfire right along with the Atari, TRS-80, Commodore, and Apple computers that were igniting the personal computing revolution.
The number of nodes grew, and as people started playing games, the speed of those modems jumped up, with the v.32 standard hitting 9600 baud in ’84 and over 25k in the early 90s. By the early 1980s we got FidoNet, a network of Bulletin Board Systems, and by the early 90s we had 25,000 BBSes. And other nets had been on the rise - and these were commercial ventures.
The largest of those dial-up providers was America Online, or AOL. AOL began in 1985 and, like most of the other dial-up providers of the day, was there to connect people to a computer they hosted, like a timesharing system, and give access to fun things: games, news, stocks, movie reviews, chatting with your friends, and so on. There was also CompuServe, The WELL, PSINet, Netcom, Usenet, AlterNet, and many others. Some started to communicate with one another with the rise of the Metropolitan Area Exchanges, who got an NSF grant to establish switched ethernet exchanges, and the Commercial Internet Exchange in 1991, established by PSINet, UUNET, and CERFnet out of California.
Those slowly moved over to the Internet, and even AOL got connected to the Internet in 1989. Thus the dial-up providers went from effectively being timesharing systems to Internet Service Providers, as more and more people expanded their horizons away from the walled garden of the time sharing world and towards the Internet. The number of BBS systems started to wind down. All these IP addresses couldn’t be managed easily, so address management evolved from contracts with research universities, to DARPA, and then to IANA as a part of ICANN - and eventually the Regional Internet Registries were developed, so that AFRINIC could serve Africa; ARIN could serve Antarctica, Canada, the Caribbean, and the US; APNIC could serve South, East, and Southeast Asia as well as Oceania; LACNIC could serve Latin America; and RIPE NCC could serve Europe, Central Asia, and West Asia. By the 90s the Cold War was winding down (temporarily at least), so they even added Russia to RIPE NCC.
And so, using tools like Winsock, any old person could get on the Internet by dialing up. Dial-up modems transitioned to DSL and cable modems. We got the emergence of fiber, with regional centers and even national FiOS connections. And because of all the hard work of all of these people, and the money dumped into it by the various governments and research agencies, life is pretty darn good.
When we think of the Internet today, we think of this interconnected web of endpoints and content that is all available. Much of that was made possible by the development of the World Wide Web by Tim Berners-Lee at CERN in 1991. Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. Netscape’s IPO is probably one of the most pivotal moments, when investors from around the world realized that all of this research and tech was built on standards - and while there were some patents, the standards were freely usable by anyone.
Those standards led to an explosion of companies, like Yahoo! from a couple of Stanford grad students and Amazon, started by a young hedge fund Vice President named Jeff Bezos, who noticed all the money pouring into these companies and went off to do his own thing in 1994. The pace at which companies arose to create and commercialize content and ideas, bringing every industry online, was ferocious.
And there were the researchers still writing the standards, and even commercial interests helping with that. And there were open source contributors who helped make some of those standards easier for regular old humans to implement. And tools for those who build tools. And from there the Internet became what we think of today: quicker and quicker connections, more and more productivity gains, a better quality of life, better telemetry into all aspects of our lives, and, with the miniaturization of devices to support wearables, that even extends to our bodies. Yet it still sits on the same fundamental building blocks as before. The IANA functions to manage IP addressing have moved to the private sector, as have many an onramp to the Internet, especially as Internet access has become more ubiquitous and we enter the era of 5G connectivity.
And it continues to evolve as we pivot to the new needs and threats a globally connected world represents: IPv6, various secure DNS options, countermeasures for spam and phishing, and dealing with the equality gaps surfaced by our new online world. We have disinformation, so sometimes we might wonder what’s real and what isn’t. After all, any old person can create a website that looks legit and put whatever they want on it. Who’s to say what reality is, other than what we want it to be? This was pretty much what Morpheus was offering with his choice of pills in The Matrix. But underneath it all, there’s history. And it’s a history as complicated as unraveling the meaning of an increasingly digital world. And it is wonderful and frightening and lovely and dangerous and true and false and destroying the world and saving the world all at the same time.
This episode is pretty simplistic, and many of the aspects we cover have entire episodes of the podcast dedicated to them, from the history of Amazon to Bob Taylor to AOL to the IETF to DNS and even Network Time Protocol. It’s a story that necessarily leaves people out; otherwise scope creep would go all the way back to Volta and the constant electrical current humanity received with the battery. But hey, we also have an episode on that! And many an advance has plenty of books and scholarly works dedicated to it - all the way back to the first known computer (in the form of clockwork), the Antikythera Device out of Ancient Greece. Heck, even Louis Gerstner deserves a mention for selling IBM’s stake in all this to focus on things that kept the company going, not moonshots.
But I’d like to dedicate this episode to everyone not mentioned in trying to tell a story of emergent networks. Just because the networks were growing fast and our modern infrastructure was becoming more and more deterministic doesn’t mean there aren’t so, so, so many people who are a part of this story, whether they were writing a text editor, helping fund the work, pushing paper, writing specs, selling network services, or getting zapped while trying to figure out how to move current. Each with their own story to be told. As we round the corner into the third season of the podcast we’ll start having more guests. If you have a story and would like to join us, use the email button on thehistoryofcomputing.net to drop us a line. We’d love to chat!
COBOL! Just its name can strike terror in the hearts of programmers. This language is old, it follows its own strange syntax, and somehow it still runs the world of finance and government. But is COBOL really as bad as it's made out to be? Today we are taking a look at the language's origins and how it's become isolated from nearly every other programming language in common use. Perhaps most importantly for me, we will see if Grace Hopper should really be blamed for unleashing this beast onto mainframes.
https://archive.org/details/historyofprogram0000hist - History of Programming Languages, contains Sammet's account of CODASYL
https://archive.org/details/bitsavers_codasylCOB_6843924/ - COBOL 60 Manual
https://sci-hub.do/10.1016/0066-4138%2860%2990042-2 - FLOW-MATIC/MATH-MATIC usage paper
James and John discuss eBay Finds: Mac Software Sampler catalog, AudioVision monitor, and MacNifty. They talk to Chris about Macintosh Garden, and news includes the Motion Design Mac commercial, Vestaboard, and MacSD.
Nearly everything is fine in moderation. Plastics exploded as an industry in the post-World War II boom of the 50s and on - but their story goes back far further. A plastic belongs to a category of materials called polymers: materials composed of long chains of molecules. Polymers are easily found in nature; cellulose, which makes up the cell walls of plants, comes in many forms. But while the word plastic comes from easily pliable materials, we don’t usually think of plant-based products as plastics. Instead, we think of the synthetic polymers.
But documented uses go back thousands of years, especially early uses of natural rubbers, milk proteins, gums, and shellacs. As we rounded the corner into the mid-1800s with the rise of chemistry, things picked up steam. That’s when Charles Goodyear wanted rubber that wouldn’t melt in the summer or crack in the winter and so discovered vulcanization as a means to treat it. Vulcanization is when rubber is heated and mixed with other chemicals like sulphur.
Then in 1869 John Wesley Hyatt looked for an alternative to natural ivory for things like billiard balls. He found that cotton fibers could be treated with camphor, which came from the waxy wood of camphor laurels, and the resulting substance could be shaped, dried, and passed off as most anything nature produced. When Hyatt pioneered these plastics most camphor was extracted from trees, but today most camphor is synthetically produced from petroleum-based products, further freeing humans from needing natural materials to produce goods. Not only could we skip killing elephants but we could avoid chopping down forests to meet our needs for goods.
Leo Baekeland gave us Bakelite in 1907. By then we were using other materials and the hunt was on for all kinds of new ones. Shellac had been used as a moisture sealant for centuries and came from the female lac bugs in trees around India; it could also be used to insulate electrical components. Baekeland created a phenol and formaldehyde solution he called Novolak and, much as with the advent of steel, realized that by changing the temperature and the pressure applied to the solution he could make it harder and more moldable. Thus Bakelite became the first fully synthetic polymer.
Hermann Staudinger did more of the academic research to explain why these reactions were happening. In 1920, he wrote a paper that looked at rubber, starch, and other polymers, explaining how their long chains of molecular units were linked by covalent bonds - thus their high molecular weights. He would go on to collaborate with his wife Magda Voita, who was a botanist, and see his polymer theories proven. And so plastics went from experimentation to science.
Scientists and experimenters alike continued to investigate uses and by 1925 there was even a magazine called Plastics. They could add filler to Bakelite and create colored plastics for all kinds of uses and started molding jewelry, gears, and other trinkets. They could heat it to 300 degrees and then inject it into molds. And so plastic manufacturing was born. As with many of the things we interact with in our modern world, use grew through the decades and there were other industries that started to merge, evolve, and diverge.
Éleuthère Irénée du Pont had worked with gunpowder in France, and his family immigrated to the United States after the French Revolution. He’d worked with chemist Antoine Lavoisier while a student and started producing gunpowder in the early 1800s. That company, which evolved into the modern DuPont, always excelled in materials sciences, and through the 1920s it also focused on a number of polymers. One of its employees, Wallace Carothers, invented neoprene and so gave us our first super polymer in 1930. He would go on to invent nylon as a synthetic form of silk in 1935. DuPont would also bring us various insecticides and, in 1938, Teflon.
Acrylic acid research went back to the mid-1800s, but as people were experimenting with combining chemicals around this same time, the British chemists John Crawford and Rowland Hill, and independently the German Otto Röhm, developed products based on polymethyl methacrylate. Here, they were creating clear, hard plastic to be used like glass. The Brits called theirs Perspex and the Germans called theirs Plexiglas when they went to market, with our friends back at DuPont creating yet another called Lucite.
The period between World War I and World War II saw advancements in nearly every science, from mechanical computing to early electrical switching and, of course, plastics. The Great Depression slowed the advancements, but World War II and some of the basic research happening around the world caused an explosion as governments dumped money into build-ups. That’s when DuPont cranked out parachutes and tires and even got involved in building the Hanford plutonium plant as a part of the Manhattan Project. This took them away from things like nylon, which led to riots over stocking shortages. We were clearly in the era of synthetics used in clothing.
Leading up to the war and beyond, every supply chain of natural goods got constrained. So synthetic replacements were heavily researched and new uses were discovered all over the place. Add in assembly lines and we were pumping out things to bring joy or improve lives at a constant clip. BASF had been making dyes since the 1860s, but chemicals are chemicals, and it developed polystyrene in the 1930s and continued to grow and benefit from both licensing and developing other materials, like Styropor insulating foam.
Dow Chemical had been founded in the 1800s by Herbert Henry Dow but became an important part of the supply chain for the growing synthetics business, working with Corning to produce silicones and producing styrene and magnesium for light aircraft parts. They too would help in nuclear developments, managing the Rocky Flats plutonium trigger plant, and then came napalm, Agent Orange, breast implants, plastic bottles, and anything else we could mix chemicals into. Expanded polystyrene led to plastics in cups, packaging, and anything else.
By the 60s we were fully in a synthetic world. A great quote from 1967’s The Graduate was: “I want to say one word to you. Just one word. Are you listening? Plastics.” The future was here. And much of that future involved injection molding machines, which were becoming more and more common. Many a mainframe was encased in metal, but with hard plastics we could build faceplates instead. The IBM mainframes had lots of blinking lights recessed into holes in plastic, with metal switches sticking out. Turns out people get shocked less when the whole thing isn’t metal.
The minicomputers were smaller, but by the time of the PDP-11 there were plastic toggles and a plastic front on the chassis. The Altair 8800 ended up looking a lot like that, bringing the technology to the hobbyist. By the time the personal computer went mainstream, the full case was made with injection molding.
The things that went inside computers were increasingly plastic as well. Going back to the early days of mechanical computing, gears were made out of metal, but tubes were often mounted on circuits screwed to wooden boards. Albert Hanson had worked on foil conductors laminated to insulating boards going back to 1903, Charles Ducas patented electroplating circuit patterns in 1927, and the Austrian Paul Eisler invented printed circuits for radio sets in the mid-1930s. John Sargrove then figured out he could spray metal onto plastic boards made of Bakelite in the late 1930s; uses expanded to proximity fuzes in World War II, and then Motorola helped bring printed circuits into broader consumer electronics in the early 1950s.
Printed circuit boards then moved to screen printing metallic paint onto various surfaces, and Harry Rubinstein patented the printing of electronic components, which helped pave the way for integrated circuits. Board lamination and etching were added to the process, and the conductive material might be etched copper, plated substrates, or even silver inks like those used in RFID tags. We’ve learned over time to make things easier, and with more precise machinery we were able to build smaller and smaller boards and chips, and eventually 3D-printed electronics - even the Circuit Scribe pen to draw circuits.
Doug Engelbart’s first mouse was wood, but by the time Steve Jobs insisted mice be mass-producible, they’d gone plastic for Engelbart and then for the Alto. Computer keyboards had evolved out of the Flexowriter and so became plastic as well. Even the springs that caused keys to bounce back up were eventually replaced with plastic and rubberized materials in different configurations.
Plastic is great for insulating electronics: plastics are poor conductors of heat, they’re light, they’re easy to mold, they’re hardy, synthetics require less than 5% of the oil we use, and they’re recyclable. Silicone, another polymer, is a term coined by the English chemist F.S. Kipping in 1901; his academic work while at University College, Nottingham would kickstart the synthetic rubber and silicone lubricant industries. But that’s not silicon. That’s an element, and a tetravalent metalloid at that. Silicon was first identified by Antoine Lavoisier in 1787. Yup, the same guy that taught du Pont. While William Shockley started off with germanium and silicon when he was inventing the transistor, it was Jack Kilby and Robert Noyce who realized how well silicon worked as a semiconductor and put it at the heart of what we now think of as the microchip. But again, that’s not a plastic…
Plastic of course has its drawbacks, especially since we don’t consume plastics in moderation. It takes 400 to a thousand years for many plastics to decompose. The rampant use in every aspect of our lives has led to animals dying after eating plastic, or getting caught in islands of it, as plastic is all over the oceans and other waterways around the world. That’s 5 and a quarter trillion pieces of plastic in the ocean, weighing a combined 270,000 tons, with another 8 million pieces flowing in each and every day. In short, the overuse of plastics is hurting our environment. Or at least our inability to control our rampant consumerism is leading to their overuse. They do melt at low temperatures, which can be a good or a bad thing. When they do, they can release hazardous fumes like PCBs and dioxins. Many of the chemical compounds they rely on come from fossil fuels, so plastics are derived from non-renewable resources. But they’re affordable and represent a trillion dollar industry. And we can all do better at recycling - which of course requires energy, and those bonds break down over time so we can’t recycle forever. Oh, and the byproducts from making these products are downright toxic.
We could argue that plastic is one of the most important discoveries in the history of humanity. That guy from The Graduate certainly would. We could argue it’s one of the worst. But we also just have to realize that our modern lives, and especially all those devices we carry around, wouldn’t be possible without plastics and other synthetic polymers. There’s a future where, instead of running out to the store for certain items, we just 3D print them. Maybe we even make filament from printed materials we no longer need. The move to recyclable materials for packaging helps reduce the negative impacts of plastics. But so does just consuming less. Except devices. We obviously need the latest and greatest of each of those all the time!
Here’s the thing: half of plastics are single-purpose. Much of that is packaging like containers and wrappers. But can you imagine life without the 380 million tons of plastics the world produces a year? Just look around right now. I couldn’t tell you how many parts of this microphone, computer, and all the cables and adapters are made of it, or how many couldn’t be made of anything else. There was a world without plastics for thousands of years of human civilization. We’ll look at one of those single-purpose, plastic-heavy industries, fast food, in an episode soon. But it’s not the plastics that are such a problem. It’s the wasteful, rampant consumerism. When I take out my recycling I can’t help but think that what goes in the recycling versus compost versus garbage is as much a symbol of who I want to be as what I actually end up eating and relying on to live. And yet, I remain hopeful that these discoveries can actually end up bringing us back into harmony with the world around us without our reverting to Luddites and walking back all of these amazing developments like we see in science fiction’s dystopian futures.
ANTIC Episode 79 - Basically MyTek and Nir
In this episode of ANTIC The Atari 8-Bit Computer Podcast… we discuss all the great work that MyTek is doing with Atari hardware (including the 576NUC), Nir Dary surprises all of the hosts with (late/early) Christmas (or birthday) Atari gifts, and Randy gets unmercifully teased about his overuse of the word “basically”.
Interview index: here
What We’ve Been Up To
YouTube videos this month
New at Archive.org
New at Github
End of Show Music - https://www.youtube.com/watch?v=2klUVHRWtyk Original Atari 800 POKEY Chiptune by Cobra Commander
ALOHANET was a wireless networking project started at the University of Hawaii in 1968. Initially, it had relatively little to do with ARPANET, but that relative isolation didn't last for long. As the two networks matured and connected together we start to see the first vision of a modern Internet. That alone is interesting, but what brings this story to the next level is the protocol developed for ALOHANET. Ya see, in this wireless network data delivery wasn't guaranteed. Every user shared a single radio channel, and terminals could talk over each other. So how did ALOHANET even function?
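To make that concrete, here's a toy, discrete-time simulation of the ALOHA idea (which makes it behave more like slotted ALOHA than the original unslotted protocol): every terminal transmits whenever it has data, overlapping frames are destroyed, and senders retry after a random delay. The parameter names and values are made up for illustration, not taken from ALOHANET itself.

```python
import random

# Toy simulation of the ALOHA idea: stations share one radio channel,
# transmit without coordination, and back off randomly after collisions.

SLOTS = 10_000       # time steps to simulate
TERMINALS = 20       # stations sharing the single channel
SEND_PROB = 0.02     # chance an idle station transmits in a step
MAX_BACKOFF = 16     # retry delay drawn uniformly from 1..MAX_BACKOFF

backoff = [0] * TERMINALS   # steps each station must still wait
delivered = collisions = 0

for _ in range(SLOTS):
    senders = []
    for t in range(TERMINALS):
        if backoff[t] > 0:
            backoff[t] -= 1            # waiting out a random backoff
        elif random.random() < SEND_PROB:
            senders.append(t)          # station just starts talking
    if len(senders) == 1:
        delivered += 1                 # alone on the channel: success
    elif len(senders) > 1:
        collisions += 1                # overlapping frames are lost
        for t in senders:
            backoff[t] = random.randint(1, MAX_BACKOFF)

print(f"delivered={delivered} collisions={collisions}")
```

Raise SEND_PROB or TERMINALS and delivery rates collapse, which is exactly the congestion behavior that limited ALOHA's channel utilization and motivated later refinements.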
Selected sources used in this episode:
https://archive.org/details/DTIC_AD0707853 - The initial 1970 ALOHANET report
https://archive.org/details/jresv86n6p591_A1b/page/n3/mode/2up - Summary paper by Kuo, contains a map of ALOHANET
https://sci-hub.do/10.1145/1499949.1499983 - Kahn's 1973 PRNET paper
https://www.eng.hawaii.edu/wp-content/uploads/2020/06/abramson1985-Development-of-the-ALOHANET.pdf - 1985 wrap-up of ALOHANET, by Abramson
Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
James and John discuss eBay Finds: denim Apple necktie, Apple rubber stamps, and Apple cups. James revisits his Dilbert cel, and news includes new Powerbook 100 battery and upcoming Apple museum in Europe.
Bob Elfstrom, The Magic Room
Interview and research by Kay Savetz.
From 1982 through 1984, Atari ran summer computer camps at several locations around the United States. I covered the Atari camps extensively in a special episode in 2015. Now it's summer 2021, and we're going back to camp!
That first year of the computer camps, in 1982, Atari commissioned a film about its summer camps, about the kids and teachers who were there, about the process of learning about computers, about kids challenging themselves, and about making friends at summer camp. Atari commissioned filmmaker Bob Elfstrom and his partner Lucy Hilmer to make the film. They shot the 26-minute film at the University of California, San Diego campus in 1982. It would be titled The Magic Room and was released the next year.
There are many scenes in the computer lab: we see close-ups of kids concentrating, thinking about the logic of their programming projects. Their faces light up as they solve their problem. There’s an adorable scene with a robotic, computer controlled turtle running across the floor, racing an actual turtle. There's kids riding horses at magic hour, and singing by the campfire, and finally an epic pillow fight, with feathers flying everywhere in the dorm hallways. The end credits were made with an Atari 800, naturally.
This interview is with the filmmaker, Bob Elfstrom. (Lucy Hilmer was unavailable for an interview.) Bob has a long list of film credits to his name. He is known for his work on Johnny Cash! The Man, His World, His Music (1969), and Mysteries of the Sea (1980) -- his IMDB page lists scores of credits.
It's easy to watch The Magic Room (and you should!). It's available at YouTube and Internet Archive.
My interview with Bob took place on June 17 and June 25, 2021.
Watch The Magic Room
The Magic Room Trailer
The story of how “the best-loved application for the Mac” took on Microsoft Works, as told by programmer Bob Hearn in 2003.
Watch Bob Hearn talking about AlphaGo starting at 4m50s.
James and John discuss eBay Finds: Apple PC 5 1/4" floppy drive, Mac TV with box, and Maple Ridge Auction items. They talk about the Wombat from Big Mess o' Wires, and news includes WWDC outcome, Apple seller awards, great collection on Reddit, and coffee table idea.
This is the second interview episode about Computers: Expressway to Tomorrow.
This episode we take a look at the earliest days of computing and one of the earliest forms of computer memory. Mercury delay lines, originally developed in the early 40s for use in radar, are perhaps one of the strangest technologies I've ever encountered. Made primarily from liquid mercury and quartz crystals, these devices store digital data as a recirculating acoustic wave. They can only be sequentially accessed. Operations are temperature dependent. And, well, they can also be dangerous to human health. So how did mercury find its way into some of the first computers?
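As a rough mental model (not the actual hardware, and with made-up sizes), you can picture a delay line as a recirculating queue: a bit is readable only at the instant it emerges from the mercury before being re-amplified and fed back in, which is why access was strictly sequential.

```python
from collections import deque

# Toy model of a mercury delay line: bits circulate as acoustic pulses,
# and the machine can only read a bit the moment it emerges from the line.

class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)      # one slot per bit "in flight"

    def tick(self):
        bit = self.line.popleft()    # pulse arrives at the receiving crystal
        self.line.append(bit)        # reshaped and re-injected at the front
        return bit                   # readable only at this instant

    def read_word(self, start, length):
        # Wait for the wave to circulate around to `start`, then read.
        for _ in range(start):
            self.tick()
        return [self.tick() for _ in range(length)]

line = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(line.read_word(start=2, length=3))  # -> [1, 1, 0]
```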
Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
The largest global power during the rise of intellectual property was England, so the world adopted her philosophies. The US had the same impact on software law.
Most case law that shaped the software industry is based on copyright law. Our first real software laws appeared in the 1970s, and we now have 50 years of jurisprudence to help guide us. This episode looks at the laws, Supreme Court cases, and some circuit appeals cases that shaped the software industry.
In our previous episode we went through a brief review of how the modern intellectual property laws came to be. Patent laws flowed from inventors in Venice in the 1400s; royals granted monopoly privileges to inventors throughout the rest of Europe over the next couple of centuries; those powers transferred to panels and academies during and after the Age of Revolutions; and the laws slowly matured for each industry as technology progressed.
Copyright laws formed similarly, although they lagged a little behind patent laws because they weren’t really necessary until we got the printing press. But when it came to data on a device, a case in 1908, which we covered in the previous episode, led Congress to enact the 1909 Copyright Act.
Mechanical music boxes evolved into mechanical forms of data storage, and computing evolved from mechanical to digital. Following World War II there was an explosion in new technologies, with those in computing funded heavily by the US government. Or at least, until we got ourselves tangled up in a very unpopular asymmetrical war in Vietnam. The Mansfield Amendment of 1969, a small rider in the 1970 Military Authorization Act, barred the US military from funding research that didn’t have a direct relationship to a specific military function. Money could still flow from ARPA into a program like the ARPAnet because we wanted to keep those missiles flying in case of nuclear war. But over time the impact was that a lot of the dollars the military had pumped into computing, to help develop the underlying basic sciences behind things like radar and digital computing, were about to dry up. This is a turning point: it was time to take the computing industry commercial. And that means lawyers.
And so we got the first laws pertaining to software shortly after the software industry emerged from more and more custom requirements for those mainframes and then minicomputers, and from the growing ranks of computer programmers. The Copyright Act of 1976 was the first major overhaul of copyright law since the 1909 Copyright Act. Since then, the US had become a true world power, and much as the rest of the world had used the British Statute of Anne of 1710 as a template for copyright protections, the world now looked on as the US developed its laws. Many nations had joined the Berne Convention for international copyright protections, but the publishing industry had exploded. We had magazines, so many newspapers, so many book publishers. And we had this whole weird new thing to deal with: software.
Congress didn’t explicitly protect software in the Copyright Act of 1976, but it did add cards and tape as mediums, and Congress knew this was an exploding new thing that would work itself out in the courts if they didn’t step in. And of course executives from the new software industry were asking their representatives to get in front of things rather than have the unpredictable courts adjudicate a weird copyright mess in places where technology meets copy protection. So in section 117, Congress appointed the National Commission on New Technological Uses of Copyrighted Works (CONTU) to provide a report about software and added a placeholder in the act that empaneled them.
CONTU held hearings. They went beyond just software, as there was another newish technology changing the world: photocopying. They presented their findings in 1978 and recommended we define a computer program as “a set of statements or instructions to be used directly or indirectly in a computer in order to bring about a certain result.” They also recommended that copies be allowed if required to use the program, and that those copies be destroyed when the user no longer has rights to the software. This is important because this was an era when we could write software into memory, or install compiled code onto a computer and then hand the media used to install it off to someone else.
At the time the hobbyist industry was just about to evolve into the PC industry, but hard disks were years out for most of those machines. It was all about floppies. Up-market, though, there was all kinds of storage, and the writing was on the wall about what was about to come: install software onto a computer, copy and sell the disk, move on. People would of course do that, but not legally.
Companies could still sign away their copyright protections as part of a sales agreement, but the right to copy was under the creator’s control. Things like End User License Agreements were still far away, though. Imagine how ludicrous the idea would have seemed in the 1970s that a piece of software gone bad could put a company out of business. That concern would come later, as we needed to limit liability and not just restrict the right to copy to those who, well, had the right to do so. Further, we hadn’t yet standardized on computer languages, and yet companies were building complicated logic to automate business and needed to be able to adapt works for other computers. So Congress, at the direction of CONTU, looked to provide that right as well, if only to the company doing the customizations, without allowing the software to then be resold. These were all hashed out and put into law in 1980.
And that’s an important moment: suddenly the party who owned a copy was the rightful owner of a piece of software. Many of the provisions read as though we were dealing with booksellers selling copies of books, not the intricate details of the technology. But technology changes quickly and those who make laws aren’t exactly technologists, so that’s to be expected.
Source code versus compiled code also got tested. In 1982, Williams Electronics v Artic International explored a video game that shipped in a ROM (which is how games were distributed before disks and cassette tapes). Here, the Third Circuit weighed in on whether, since the ROM was built into the machine, it could be copied as a utilitarian object and therefore not be covered under copyright. The source code was protected, but what about what amounts to compiled code sitting on the ROM? They of course found that it was indeed protected.
They again weighed in on Apple v Franklin in 1983. Here, Franklin Computer was cloning Apple computers and claimed it couldn’t clone the computer without copying what was in the ROMs, which at the time held a rudimentary version of what we think of as an operating system today. Franklin claimed the OS was in fact a process or method of operation, and Apple claimed it was novel. The OS shipped as object code rather than source, including tasks like Applesoft BASIC, but it was still a program and thus still protected. One and two years later, respectively, we got the first Mac OS and Windows 1.
1986 saw Whelan Associates v Jaslow. Here, Elaine Whelan had created a management system for a dental lab on the IBM Series/1, in EDL. That was a minicomputer, and when the personal computer came along she sued Jaslow because he took a BASIC version to market for the PC. He argued it was a different language and the set of commands was therefore different. But the programs looked structurally similar. She won: even though the literal code differed, “the copyrights of computer programs can be infringed even absent copying of the literal elements of the program.” It’s simple to identify literal copying of software code when it’s done verbatim, but difficult to identify non-literal copyright infringement.
But this was all professional software. What about those silly video games all the kids wanted? Well, Atari applied for a copyright for one of their games, Breakout. Here, the Register of Copyrights, Ralph Oman, chose not to register the copyright. And so Atari sued, winning on appeal.
Back to the dental lab: there were certainly other dental management packages on the market at the time, and the court in Whelan found that “copyrights do not protect ideas – only expressions of ideas.” Many found fault with the decision, and the Second Circuit heard Computer Associates v Altai in 1992. Here, the court applied a three-step Abstraction-Filtration-Comparison test to determine how similar the products were and held that Altai's rewritten code did not meet the necessary requirements for copyright infringement.
There were other types of litigation surrounding the emerging digital sphere at the time as well. The Computer Fraud and Abuse Act came along in 1986 and would be amended in 89, 94, 96, and 2001. Here, a number of criminal offenses were defined - not copyright, but they have come up to criminalize activities that should otherwise have been copyright cases. And as the rental market came to be, the Copyright Act of 1976, along with the CONTU findings, was amended, with Congress establishing provisions to cover software rental in 1990 (much as had happened with VHS tapes). Keep in mind that timesharing was just ending by then, but we could rent video games over dial-up, and of course VHS rentals were huge at the time.
Here’s a fun one: Atari infringed on Nintendo’s copyright by falsely claiming to be a defendant in a case and applying to the Copyright Office for a copy of the 10NES program, which it then used to, well, actually infringe the copyright. Atari tried to claim it couldn’t be infringement because they couldn’t make games unless they reverse engineered the system. Atari lost that one. But Accolade won a similar case against Sega soon thereafter, because reverse engineering so that more games could play on a Sega was fair use. Sony tried to sue Connectix in a similar case, where you booted the PlayStation console using a BIOS provided by Connectix. And again, that was reverse engineering for the sake of fair use of a PlayStation people paid for. Kinda’ like jailbreaking an iPhone, right? Yup, apps that help jailbreak, like Cydia, are legal on an iPhone. But Apple moves the cheese so often in terms of what’s required to jailbreak that it’s a bigger pain than it’s worth. Much better than suing everyone.
Laws are created and then refined in the courts. MAI Systems Corp. v. Peak Computer made it to the Ninth Circuit Court of Appeals in 1993. This involved Eric Francis leaving MAI, joining Peak, and then loading MAI’s diagnostic tools onto computers. MAI thought there should be a license per computer, yet Peak used the same disk in multiple computers. The crucial holding was that the copy made when the software loaded into memory, while ephemeral, was a copy of the software and so violated the copyright. We said we’d bring up that EULA, though. In 1996, the Seventh Circuit found in ProCD v Zeidenberg that a license was enforceable and not preempted by copyright law, thus allowing companies to use either copyright law or a license when seeking damages, and giving lawyers yet another reason to answer any and all questions with “it depends.”
One thing was certain: the digital world was coming fast in those Clinton years. I mean, the White House would have a Gopher page and Yahoo! would be on display at his second inauguration. So in 1998 we got the Digital Millennium Copyright Act (DMCA). Here, Congress added to Section 117 to allow for software copies if the software was required for maintenance of a computer. And yet software was still just a set of statements, like instructions in a book, that led the computer to a given result. The DMCA had provisions covering content providers and e-commerce providers. It also implemented two international treaties and provided remedies for anti-circumvention of copy-prevention systems, since by then cracking was becoming a bigger thing. There was more packed in here: we got MAI Systems v Peak Computer reversed by statute, refinements to how the Copyright Office works, modernized audio and movie rights, and provisions to facilitate distance education. And of course the DMCA protected boat hull designs because, you know, might as well cram some stuff into a digital copyright act.
In addition to the cases we covered earlier, we had Mazer v Stein, Dymow v Bolton, and even Computer Associates v Altai, which cemented the AFC method as the means by which most courts determine copyright protection as it extends to non-literal components such as dialogue and images. Time and time again, courts have weighed in on what fair use is, because the boundaries are constantly shifting, in part due to technology but also in part due to shifting business models.
One of those shifting business models was ripping songs and movies. RealDVD got sued by the MPAA for allowing people to rip DVDs. YouTube would later get sued by Viacom, but the courts found no punitive damages could be awarded. Still, many online portals started to scan for and filter out works they knew were copyright protected, especially given the rise of machine learning to aid in the process. But those were big, major companies at the time. IO Group, Inc sued Veoh over uploaded video content, and the judge found Veoh was protected by safe harbor.
Safe Harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, which shields online portals and internet service providers from copyright infringement. This would be separate from Section 230, which protects those same organizations from being sued for 3rd party content uploaded on their sites. That’s the law Trump wanted overturned during his final year in office but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and lots of other countries like England and Germany have had courts find similarly, it is now part of being an Internet company. Although the future of “big tech” cases (and the damage many claim is being done to democracy) may find it refined or limited.
In 2016, Cisco sued Arista for allegedly copying the command line interfaces used to manage switches. Cisco lost, but had claimed more than $300 million in damages. Here, the familiar Cisco command structure allowed Arista to recruit seasoned Cisco administrators to the cause. Cisco had spent decades evolving those commands and the mental models behind them, and it seemed like those commands would have been their intellectual property. But Arista hadn’t copied the code.
Then in 2017, in ZeniMax vs Oculus, ZeniMax won a half-billion-dollar case against Oculus for copying their software architecture.
And we continue to struggle with what copyright means as far as code goes. Just in 2021, the Supreme Court ruled in Google v Oracle America that using application programming interfaces (APIs), including representative source code, can be transformative and fall within fair use, though it did not rule on whether such APIs are copyrightable. I’m sure the CP/M team, who once practically owned the operating system market, would have something to say about that after Microsoft swooped in and recreated much of the work they had done. But that’s for another episode.
And traditional media cases continue. ABS Entertainment vs CBS looked at whether digitally remastering works extended copyright. BMG vs Cox Communications challenged peer-to-peer file-sharing in safe harbor cases (not to mention the whole Napster-testifying-before-Congress thing). You certainly can’t resell mp3 files the way you could drop off a few dozen CDs at Tower Records, right? Capitol Records vs ReDigi said nope. Perfect 10 v Amazon, Goldman v Breitbart, and so many more cases continued to narrow down who could restrict the right to copy audio, images, text, and other works, and how. But sometimes it’s confusing. Dr. Seuss vs ComicMix asked whether merging Star Trek with “Oh, the Places You’ll Go!” was transformative enough to get past Dr. Seuss’s copyright - or was that the Fair Use Doctrine? Sometimes I find conflicting lines in opinions. Speaking of conflict…
Is the government immune from copyright? Allen v Cooper, Governor of North Carolina made it to the Supreme Court, where the justices upheld the state’s sovereign immunity from copyright suits. Now, this was a shipwreck case, but it extended to digital works, and the Supreme Court seemed to begrudgingly find for the state, looking to new legislation as the remedy rather than awarding damages. In other words, the “digital Blackbeards” of a state could pirate software at will. Guess I won’t be writing any software for the state of North Carolina any time soon!
But what about content created by a state? Well, the state of Georgia makes various works available behind a paywall, which might be run by a third party in exchange for a cut of the proceeds. So Public.Resource goes after anything where the edict of a government isn’t in the public domain. In other words, court decisions, laws, and statutes should be free to all who wish to access them. The “government edicts doctrine” won in the end, and so access to the laws of the nation continues to be free.
What about algorithms? That’s more patent territory, when they are patentable at all, which is rare. In Gottschalk v. Benson a patent was denied for a new way to convert binary-coded decimals to numerals, while Diamond v Diehr saw an algorithm to run a rubber molding machine held patentable. And companies like Intel and Broadcom hold thousands of patents for microcode for chips.
What about the emergence of open source software and the laws surrounding social coding? We’ll get to the emergence of open source and the consequences in future episodes!
One final note: most have never heard of the names in early cases, while most have heard of the organizations listed in later cases. Settling issues in the courts has gotten really, really expensive, and it doesn’t always go the way we want. So these days, whether it’s Apple v Samsung or other tech giants, the law seems to be reserved for those who can pay for it. Sure, there are the Erin Brockovich cases of the world, and Lady Justice is still blind. We can still represent ourselves; cases and notes are free. But money can win cases by buying attorneys with deep knowledge (which doesn’t come cheap). And these cases drag on for years, and given that the startup assembly line often halts with pending legal actions, not many can withstand the latency incurred. This isn’t a “big tech is evil” comment as much as an “I see it and don’t know a better rubric but it’s still a thing” kinda’ comment.
Here’s something better that we’d love to have a listener take away from this episode. Technology is always changing. Laws usually lag behind technology change as (like us) they’re reactive to innovation. When those changes come, there is opportunity. Not only has the technological advancement gotten substantial enough to warrant lawmaker time, but the changes often create new gaps in markets that new entrants can leverage. Either leaders in markets adapt quickly or see those upstarts swoop in, having no technical debt and being able to pivot faster than those who previously might have enjoyed a first user advantage. What laws are out there being hashed out, just waiting to disrupt some part of the software market today?
James and Derek from Mac Folklore Radio discuss eBay Finds: Mac global sales training card, Twentieth Anniversary Macintosh, and Apple apron. We get an update on MFR, and news includes the upcoming WWDC, an important Jobs email, a Mac tattoo, and NanoRaptor creations.
Kay and Carrington take a midnight train going anywhere into a land of magic, but too few resources to play it any way they want it (that's the way they need it). Will the Grue Crew and this game go their separate ways, or will they embrace this capital-J Journey with open arms?
Once upon a time, the right to copy text wasn’t really necessary. If one had a book, one could copy its contents by hiring scribes to labor away at the process, and books were expensive. Then came the printing press. Now the printer of a work would put a book out, and another printer could set their press up to reproduce the same text. More people learned to read, and information flowed from the presses at the fastest pace in history.
The printing press spread from Gutenberg’s workshop in the 1440s throughout Germany, then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, one that caused England to retreat into protectionism, with Henry VIII trying to restrict what could be printed in the 1500s. But Parliament needed to legislate further.
England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent the printing of scandalous materials, and it basically gave a monopoly to The Stationers’ Company to register, print, copy, and publish books. They could enter another printer’s shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society.
The next big step came in England when the Statute of Anne was passed in 1710, named for the reigning queen, the last of the House of Stuart. While previously a publisher could appeal to have others’ copies of a work suppressed because the publisher had created it, this statute took a page out of the patent laws and granted a right of protection against copying a work for 14 years. Reading through the law and further amendments, it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and getting more books to more people. They’d clearly become less protectionist and more concerned about a literate society.
There are examples in history of granting exclusive rights to an invention from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those granted was kept. But they could also be used to grant a monopoly, with the first that’s been found being granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish.
Still, this wasn’t exactly an exclusive right to an invention but instead a right to emigrate. And because the letters were personal, letters patent evolved into royal grants, which Queen Elizabeth was providing in the late 1500s. That emerged out of the need for patent laws proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II of France established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent review.
English law evolved, as perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money, allowing a specific industry to turn into a monopoly that funded the royal family. James I was forced to revoke the previous patents, but a system was needed. And so the patent system was formalized, and patents for inventions were limited to 14 years when the Statute of Monopolies was passed in England in 1624. Over the next few decades we started seeing drawings added to patent requests, and sometimes even required. We saw industries fork, the addition of medical patents, and an explosion in the types of patents requested.
And patents weren’t just in England. The mid-1600s saw the British colonies issuing their own patents, and patent law was evolving outside of England as well. The French system was growing with more discoveries; by 1729 there were digests of patents being printed in Paris, and we still keep open listings of patents so they’re easily proven in court. The maturing Age of Enlightenment clashed with the financial protectionism of patent laws, and intellectual property as a concept emerged, borrowing from the patent institutions - bringing us right back to the Statute of Anne, which established the modern copyright system. The Statute of Anne and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, probably mostly because they’d long been removed from monarchal control and handed to various public institutions.
The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutions throughout the 1820s, the end of feudalism, Napoleon. But the wars settled down, and a world order of sorts came during the late 1800s. One aspect of that world order was the Berne Convention, signed in 1886. This established the mutual recognition of copyrights among the sovereign nations that signed onto the treaty, rather than having nations enter into pacts with one another individually. Now the right to copy works was automatically in force at creation, so authors no longer had to register their works in Berne Convention countries.
Following the Age of Revolutions, there was also an explosion of inventions around the world. Some ended up putting copyrighted materials onto reproducible forms - early data storage. Previously we could copyright sheet music, but the introduction of the player piano led to the need to determine the copyrightability of piano rolls in White-Smith Music v. Apollo in 1908. Here the US Supreme Court found that these were not copies as interpreted in the US Copyright Act, because only a machine could read them, and they basically told Congress to change the law. So Congress did.
The Copyright Act of 1909 then specified that even if only a machine can use information that’s protected by copyright, the copyright protection remains. And so things sat for a hot minute as we learned first mechanical computing, which was patentable under the old rules, and then electronic computing, which was also patentable. Jacquard patented his punch cards in 1801, but by the time Babbage and Lovelace used them in his engines that patent had expired. The first digital computer to get a patent was the Eckert-Mauchly ENIAC, filed in 1947, granted in 1964, and, because there was prior unpatented work, overturned in 1973. Dynamic RAM was patented in 1968. But these were all physical inventions.
Software took a little longer to become a legitimate legal quandary. Given the time it took to reproduce punch cards and the lack of truly mass-produced software, it didn’t become an issue until after the advent of machines like Whirlwind, the DEC PDPs, and the IBM S/360.
Inventions didn’t need a lot of protection when they were complicated and it took years to build one. I doubt the inventor of the Antikythera Device in Ancient Greece thought to protect their intellectual property; they’d likely have been delighted if anyone else in the world had thought to, or been capable of, creating what they created. Over time the capabilities of others rise, and our intellectual property becomes more valuable because progress moves faster with each generation. Those Venetians saw how technology and automation were changing the world and allowed the protection of inventions to provide a financial incentive to invent. Licensing the commercialization of inventions then allowed us to begin the slow process of putting ideas on a commercialization assembly line.
Books didn’t need copyright until they could be mass produced and so became commercially viable. A writer writes, or creates intellectual property, and a publisher prints and distributes. Thus we put the commercialization of literature and thoughts and ideas on an assembly line, and we began doing so far before the Industrial Revolution.
Once there were more inventions, and some became capable of mass producing the registered intellectual property of others, we saw a clash between copyrights and patents. And so we got the Copyright Act of 1909. But with digital computers we suddenly had software emerging as an entire industry. IBM had customized software for customers for decades, but computer languages like FORTRAN and mass storage devices that could be moved between machines allowed software - and sometimes entire segments of business logic - to move between companies. By the 1960s, companies were marketing computer programs as a cottage industry.
The first computer program was deposited at the US Copyright Office in 1961. It was a simple thing. A tape with a computer program that had been filed by North American Aviation. Imagine the examiners looking at it with their heads cocked to the side a bit. “What do we do with this?” They hadn’t even figured it out when they got three more from General Dynamics and two more programs showed up from a student at Columbia Law.
A punched tape held the equivalent of a stack of punched cards, and a magnetic tape just held more punched tape that went faster. This was pretty much what those piano rolls from the 1909 law had on them. Registration was added for all five programs in 1964, and thus software copyright was born. But of course a program wasn’t just a metallic roll with impressions marking when a player piano should strike a hammer. If someone found a roll on the ground, they could put it into another piano and hit play, but the likelihood that they could reproduce the roll was low. The ability to reproduce punch cards had been there, but while it likely didn’t take as long as reproducing a copy of Plato’s Republic before the advent of the printing press, the occurrences weren’t frequent enough to create a pressing need for adjudication. That changed with high-speed punch devices and then the ability to copy magnetic tape.
Contracts (which we might think of as EULAs today, in a way) provided a license for a company to use software, but new questions were starting to form around who was bound to the contract and how far protection extended based on a number of factors. Thus the LA, or License Agreement, part of EULA, rather than just a contract when buying a piece of software.
And this brings us to the forming of the modern software legal system. That’s almost a longer story than the written history we have of early intellectual property law, so we’ll pick that up in the next episode of the podcast!
James and John discuss eBay Finds: Inside Macintosh Promo Edition, Mac Colour Classic, and iPhone Sales Training Workbook. Michael Mulhern joins to discuss Internet archiving, and news includes Apple Lossless Audio, Why Mac returns, a new Apple museum.
Coverage of the remaining magazines for Nov 1982.
ANTIC Episode 78 - The Extremely Elderly Computer Geeks Club
In this episode of ANTIC The Atari 8-Bit Computer Podcast… We discuss lots of new things you can do with your FujiNet, the differences in FujiNet versions, the Old Computer Geeks Club, and other recent Atari news...
Interview index: here
What We’ve Been Up To
YouTube videos this month
New at Archive.org
Where did educational games come from? According to some, the practice of using games in classrooms started in the early 60s with the appearance of the Sumerian Game. However, the story is more complicated than that. This episode we dive into the Sumerian Game, some of the earliest educational games, and the bizarre legacy of a lost piece of software.
Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
ANTIC Interview 417 - Computers: Expressway to Tomorrow
James and John discuss eBay Finds: postcard pair, boxed Colour Classic, Newton MessagePad 100, and Apple sales poster. John works with the BlueSCSI, and news includes repurposing a TAM, shipping woes, System 7 anniversary, Apple Identity Guide.
The TI TMS9900 is a fascinating microprocessor. It was the first 16-bit microprocessor on the market, it has a unique architecture that makes it well suited to multitasking, and it was on IBM's shortlist to power the PC. Today we are looking at this strange chip and the TI minicomputers that predated its design. Along the way we will construct a theoretical TI-powered PC and see how home computing could have changed if IBM had taken a slightly different path.
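One way to picture the multitasking trick, as a loose sketch rather than TI's actual implementation: the TMS9900 keeps its 16 general-purpose registers in ordinary RAM at the address held in a Workspace Pointer, so a context switch swaps one pointer instead of copying registers to and from a stack. The addresses below are invented for illustration.

```python
# Sketch of the TMS9900 workspace idea: "registers" are just words of RAM
# at the Workspace Pointer (WP), so switching tasks is a pointer swap.

ram = [0] * 1024          # pretend word-addressed main memory

TASK_A_WP = 0x100         # each task owns 16 words of RAM as its registers
TASK_B_WP = 0x200

wp = TASK_A_WP            # the CPU's workspace pointer

def reg_read(n):
    return ram[wp + n]    # register n is simply memory at WP + n

def reg_write(n, value):
    ram[wp + n] = value

reg_write(0, 42)          # task A sets its R0
wp = TASK_B_WP            # "context switch": a single pointer assignment
reg_write(0, 7)           # task B's R0 lives elsewhere in RAM
wp = TASK_A_WP            # switch back
print(reg_read(0))        # -> 42; task A's registers were never copied
```

The trade-off, of course, is that every register access becomes a memory access, a design that looked clever while RAM kept pace with CPUs and aged poorly once it didn't.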
Texts are sent and received using SMS, or Short Message Service. Due to the amount of bandwidth available on second generation networks, messages were initially limited to 160 characters. You know the 140-character max from early Twitter? It traces back to that limit. We are so glad you chose to join us on this journey where we weave our way from the topmast of the 1800s to the skinny jeans of San Francisco with Twitter.
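That 160 number isn't arbitrary. An SMS payload is 140 octets, and the default GSM alphabet squeezes each character into 7 bits, so 140 × 8 = 1120 bits, and 1120 / 7 = 160 characters. Here's a simplified sketch of that packing; the real GSM 03.38 alphabet differs from ASCII for some characters, so treat this as illustrative.

```python
# Pack 7-bit characters into octets the way GSM SMS does conceptually:
# append each 7-bit code to a bit buffer and flush whole bytes.

def pack_7bit(text):
    bits = nbits = 0
    out = bytearray()
    for ch in text:
        bits |= (ord(ch) & 0x7F) << nbits   # append one 7-bit septet
        nbits += 7
        while nbits >= 8:                   # flush completed octets
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:
        out.append(bits & 0xFF)             # leftover bits, zero padded
    return bytes(out)

print(len(pack_7bit("Merry Christmas")))    # 15 chars -> 14 bytes
assert len(pack_7bit("A" * 160)) == 140     # a full SMS fits exactly
```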
What we want you to think about through this episode is the fact that this technology has changed our lives. Before texting we had answering machines, we wrote letters, and we sent more emails, but we didn’t have an expectation of an immediate response. Maybe someone got back to us the next day, maybe not. But now we rely on texting to coordinate gatherings, pick up the kids, get a pin on a map, provide technical support, send links, send memes, and convey feelings in ways that we didn’t when writing letters. I mean, including an animated gif in a letter meant melty peanut butter. Wait, that’s Jif. Sorry.
And few technologies have sprung into our everyday use so quickly in the history of technology. It took generations, if not 1,500 years, for bronze working to migrate out of the Vinča culture and bring an end to the Stone Age. It took a few generations, if not a couple of hundred years, for electricity to spread throughout the world. The rise of computing took a few generations to spread from mechanical to digital, then to personal computing, and now to ubiquitous computing. And we’re still struggling to come to terms with job displacement and the productivity gains that have shifted humanity more rapidly than any other time, including the collapse of the Bronze Age.
But the rise of cellular phones, and then their digitization, combined with globalization has put instantaneous communication in the hands of everyday people around the world. We’ve decreased our reliance on paper and transporting paper and moved more rapidly into a digital, even post-PC era. And we’re still struggling to figure out what some of this means. But did it happen as quickly as we think? Let’s look at how we got here.
Bell Telephone introduced the push-button phone in 1963 to replace the rotary dial telephone, which had been invented in 1891 and become a standard. It was only a matter of time before we’d find a way to associate letters with it. And once we could send digits over the line instead of just opening up a voice channel, it was only a matter of time before we’d start sending data as well. Some of those early digits we sent were things like our social security number or some other identifier for early forms of call routing. Heck, the fax machine was invented all the way back in 1843 by a Scottish inventor called Alexander Bain.
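Those button presses traveled as DTMF, or dual-tone multi-frequency, signals: each key sounds one low-group and one high-group tone at once, and the switch decodes the pair back into a digit. A toy sketch of the standard frequency grid in Python (the frequencies come from the DTMF standard; the code itself is just illustrative):

# Each DTMF key is the sum of one row (low) and one column (high) tone, in Hz.
LOW = [697, 770, 852, 941]    # keypad rows
HIGH = [1209, 1336, 1477]     # keypad columns (a fourth, 1633 Hz, serves the A-D keys)
KEYS = ["123", "456", "789", "*0#"]

def dtmf_tones(key: str) -> tuple[int, int]:
    """Return the (low, high) frequency pair the phone plays for a key."""
    for row, labels in enumerate(KEYS):
        col = labels.find(key)
        if col != -1:
            return (LOW[row], HIGH[col])
    raise ValueError(f"not a DTMF key: {key!r}")

print(dtmf_tones("5"))  # (770, 1336)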
So given that we were sending different types of data over permanent and leased lines it was only a matter of time before we started doing so over cell phones.
The first cellular networks were analog - what we now think of as first generation, or 1G. GSM, or Global System for Mobile Communications, is a standard that came out of the European Telecommunications Standards Institute and started getting deployed in 1991. That became what we now think of as 2G and paved the way for new types of technologies to get rolled out.
The first text message simply said “Merry Christmas” and was sent on December 3rd, 1992, by Neil Papworth to Richard Jarvis at Vodafone. As with a lot of technology, the idea had actually been thought up eight years earlier, by Bernard Ghillebaert and Friedhelm Hillebrand. From there, the use cases moved to simply alerting devices of various statuses, like when there was a voicemail. These days we mostly use push notification services for that.
To support using SMS for that, carriers started building out SMS gateways, and by 1993 Nokia was the first cell phone maker to actually support end users sending text messages. Texting was expensive at first, but adoption slowly increased. We could text in the US by 1995, but cell phone subscribers were sending fewer than six texts a year on average. As networks grew and costs came down, adoption increased to a little over one text a day by the year 2000.
Another reason adoption was slow was that using multi-tap to send a message sucked. Multi-tap meant using the 10-key pad on a device to type out messages: tap the 2 key once for A, twice for B, three times for C, and once more for the numeral itself. The 3 key holds D, E, and F; the 4 holds G, H, and I; the 5 holds J, K, and L; the 6 holds M, N, and O; the 7 holds P, Q, R, and S; the 8 holds T, U, and V; and the 9 holds W, X, Y, and Z. This layout goes back to old Bell phones that had those letters printed under the numbers, so if we needed to call 1-800-PODCAST we could map which letters went to which digits.
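To make the tedium concrete, here is a toy multi-tap decoder in Python (illustrative only, not any phone's actual firmware):

# Repeated presses of a key cycle through its letters; a pause separates two
# letters that live on the same key. Spaces stand in for that pause here.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap(presses: str) -> str:
    """Decode runs of key presses, e.g. '44 33 555 555 666' -> 'hello'."""
    letters = []
    for run in presses.split():
        options = KEYPAD[run[0]]
        letters.append(options[(len(run) - 1) % len(options)])
    return "".join(letters)

print(multitap("44 33 555 555 666"))  # hello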
A small company called Research in Motion introduced the Inter@ctive Pager in 1996 to do two-way paging. Paging services went back decades. My first was a SkyTel, a service with its roots in Mississippi, where John N. Palmer bought a 300-person paging company that ran an old-school radio paging service. The FCC license he picked up led to more acquisitions through Alabama, Louisiana, and New York, and the company grew nationally, hitting 30,000 subscribers in 1989 and over 200,000 less than four years later. A market validated, RIM introduced the BlackBerry on the DataTAC network in 2002, expanding from just text to email, mobile phone service, faxing, and web browsing. We got the Treo the same year. But that now-iconic BlackBerry keyboard? Nokia was the first cellular device maker to put a full keyboard on a phone, with its Nokia 9000i Communicator in 1997, so it wasn’t an entirely new idea.
But by now, more and more people were thinking about what the future of mobility would look like. The 3rd Generation Partnership Project, or 3GPP, was formed in 1998 to dig into next-generation networks. It began as an initiative at Nortel and AT&T but grew to include NTT DoCoMo, British Telecom, BellSouth, Ericsson, Telenor, Telecom Italia, and France Telecom - a truly global footprint. With a standards body in place, we could move faster, and they began planning the roadmap for 3G and beyond (at this point we’re on 5G).
Faster data transfer rates let us do more. We weren’t just sending texts any more. MMS, or Multimedia Messaging Service, was then introduced, and use grew to hundreds of millions and then billions of photos sent, encoded using technology much like the MIME we use for multimedia content on websites. At this point, people were paying a fee for every x number of messages and every MMS. Phones had cameras now, so in a pre-Instagram world this was how we shared photos. Granted, they were blurry by modern standards, but progress. Devices became more and more connected as data plans expanded, eventually often becoming unlimited.
But SMS was still slow to evolve in a number of ways. For example, group chat was not really much of a thing. That is, until 2006, when a little company called Twitter came along to make it easy for people to post a message to their friends. Initially it worked over text message, until they moved to an app. And texting was used by some apps to let users know there was data waiting for them. Until it wasn’t. Twilio was founded in 2008 to make it easy for developers to add texting to their software. Now nearly every possible form of text integration was as simple as importing a framework.
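As a rough sketch of how simple that became - using Twilio's Python helper library, with placeholder credentials and phone numbers - sending a text is just a few lines:

from twilio.rest import Client

# Placeholder account SID and auth token from the Twilio console; not real values.
client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

message = client.messages.create(
    body="Your order has shipped!",
    from_="+15005550006",  # a Twilio-provided number (this one is a test number)
    to="+15551234567",     # placeholder recipient
)
print(message.sid)  # an ID you can use to track delivery status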
Apple introduced the Apple Push Notification service, or APNs, in 2009. By then devices were always connected to the Internet, and the constant send-and-receive polling for email and other apps that was fine on desktops was destroying battery life. APNs let developers build apps that only established a communication channel when there was data to deliver. Initially push notifications were capped at 256 bytes, but given their popularity and different implementation needs, notifications grew to 2 kilobytes in 2014, and then APNs moved to an HTTP/2 interface with a 4-kilobyte payload in 2015. This is important because it paved the way for iChat - now called iMessage, or just Messages - and then similar services for other platforms that moved instant messaging off SMS and MMS and over to the vendor who builds the device.
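The payload itself is a small JSON dictionary. A minimal sketch (the alert text is made up; the reserved "aps" key is Apple's documented structure):

import json

# Everything APNs itself interprets lives under the reserved "aps" key.
payload = {
    "aps": {
        "alert": "New message from Charles",  # hypothetical alert text
        "badge": 1,                           # number shown on the app icon
        "sound": "default",
    }
}

# Payloads were capped at 256 bytes originally; the HTTP/2 API allows 4 KB.
encoded = json.dumps(payload).encode("utf-8")
assert len(encoded) <= 4096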
Facebook Messenger came along in 2011, and now the kids use Instagram messaging, Snapchat, Signal, or any number of other messaging apps. Or they just text. It’s one of a billion communications tools that also include Discord, Slack, Teams, LinkedIn, and even the in-game chat options in many a game. Kinda’ makes restricting communications - and restricting spam - a bit of a challenge at this point.
My kid finishes track practice early. She can just text me. My dad can’t make it to dinner. He can just text me. And of course I can get spam through texts. And everyone can message me on one of about 10 other apps on my phone. And email. On any given day I receive upwards of 300 messages, so sometimes it seems like I could just sit and respond to messages all day every day and still never be caught up. And get this - we’re better for it all. We’re more productive, we’re more well connected, and we’re more organized. Sure, we need to get better at having more meaningful reactions when we’re together in person. We need to figure out what a smaller, closer knit group of friends is like and how to be better at being there for them rather than just sending a sad face in a thread where they’re indicating their pain.
But there’s always a transition where we figure out how to embrace these advances in technology. There are always opportunities in the advancements, and there are always new evolutions built atop previous evolutions. The rate of change is increasing. The reach of change is increasing. And the speed at which changes propagate is unparalleled today. Some will rebel against changes, seeking solace in older ways. It’s always been like that - the Amish can often be seen on a buggy pulled by a horse, so a television or a phone capable of texting would certainly be out of the question. Others embrace technology faster than some of us are ready for. Like when I realized some people had moved away from talking on phones and were pretty exclusively texting. Spectrums.
I can still remember picking up the phone and hearing a neighbor on with a friend. Party lines were still a thing in Dahlonega, Georgia when I was a kid. I can remember the first dedicated line and getting in trouble for running up a big long distance bill. I can remember getting our first answering machine and changing messages on it to be funny. Most of that was technology that moved down market but had been around for a long time. The rise of messaging on the cell phone then smart phone though - that was a turning point that started going to market in 1993 and within 20 years truly revolutionized human communication. How can we get messages faster than instant? Who knows, but I look forward to finding out.
James and John discuss eBay Finds: iMac piggy bank, Apple IIGS Woz Edition, and black Quadra 700. Jeremy returns to the podcast with a show of his own, Jeremy's Retro Bar. News includes classic Mac NFTs on iPad, Lego Mac Plus, and BlueSCSI.
Java, Ruby, PHP, Go. These are languages used to build web applications that dynamically generate content, which is then interpreted by a web browser. That content is rarely static these days, and the power of the web is that an app or browser can reach out, obtain some data, get back some XML or JSON or YAML, and provide an experience to a computer, mobile device, or even an embedded system. The web is arguably the most powerful, transformational technology in history.
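As a tiny illustration of that pattern - a hedged sketch, where the URL is a placeholder rather than a real endpoint - fetching JSON and using it takes only a few lines of Python:

import json
import urllib.request

# Hypothetical endpoint; any API that returns JSON works the same way.
with urllib.request.urlopen("https://example.com/api/episodes") as resp:
    episodes = json.load(resp)

for episode in episodes:
    print(episode["title"])  # assumes each record carries a "title" field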
But the story of the web begins in philosophies that far predate its inception. It goes back to a file - which we can think of as a document - on a computer that another computer reaches out to and interprets. A file composed of hypertext. Ted Nelson coined the term hypertext. Plenty of others put the concepts of linking objects into the mainstream of computing. But he coined the term - a term he’s barely connected to in the minds of many. Why is that?
Tim Berners-Lee invented the World Wide Web in 1989. Elizabeth Feinler developed a registry of names that would evolve into DNS, so we could find computers online and access those web sites without typing in impossible-to-remember numbers. Bob Kahn and Leonard Kleinrock were instrumental in the packet switching and internetworking protocols that allowed all those computers to be connected together, providing the schemes for those numbers. Some will know these names; most will not.
But a name that probably doesn’t come up enough is Ted Nelson. His tale is one of brilliance and the early days of computing and the spread of BASIC and an urge to do more. It’s a tale of the hacker ethic. And yet, it’s also a tale of irreverence - to be used as a warning for those with aspirations to be remembered for something great. Or is it?
Steve Jobs famously said “real artists ship.” Ted Nelson did ship. Until he didn’t. Let’s go all the way back to 1960, when he started Project Xanadu. Actually, let’s go a little further back first.
Nelson was born to TV director Ralph Nelson and Celeste Holm, who won an Academy Award for her role in Gentleman’s Agreement in 1947, took home another pair of nominations over her career, and was the original Ado Annie in Oklahoma!. His dad worked on The Twilight Zone - so of course he majored in philosophy at Swarthmore College, went off to the University of Chicago and then Harvard for graduate school, and took a stab at film after he graduated. But he was meant for an industry that didn’t exist yet, one that would some day eclipse the film industry: software.
While in school he got exposed to computers and started to think about this idea of a repository of all the world’s knowledge. And it’s easy to imagine a group of computing aficionados sitting in a drum circle, smoking whatever they were smoking, and having their minds blown by that very concept. And yet, it’s hard to imagine anyone in that context doing much more. And yet he did.
Nelson created Project Xanadu in 1960. As we’ll cover, he did a lot of projects during the remainder of his career. The journey is what is so important, even if we never get to the destination - because sometimes we influence the people who do get there. And the history of technology is as much about failed or incomplete evolutions as it is about those that become ubiquitous.
It began with a project while he was enrolled in Harvard grad school. Other word processors were at the dawn of their existence. But he began thinking through and influencing how they would handle information storage and retrieval.
Xanadu was supposed to be a computer network that connected humans to one another. It was supposed to be simple: a scheme for world-wide electronic publishing. Unlike the web, which would come nearly three decades later, it was supposed to be bilateral, with broken links self-repairing, much as nodes on the ARPAnet did. His initial proposal was a program in machine language that could store and display documents. Coming before the advent of Markdown, ePub, XML, PDF, RTF, or any of the other common open formats we use today, it was rudimentary and would evolve over time. Keep in mind, it was for documents - and as Nelson would say later, the web, which began as a document tool, was a fork of the project.
The term Xanadu was borrowed from Samuel Taylor Coleridge’s Kubla Khan, itself written after some opium-fueled dreams about a garden in Kublai Khan’s Shangdu, or Xanadu. In his biography, Coleridge explained that the rivers in the poem supply “a natural connection to the parts and unity to the whole,” describing a “stream, traced from its source in the hills among the yellow-red moss and conical glass-shaped tufts of bent, to the first break or fall, where its drops become audible, and it begins to form a channel.”
Connecting all the things was the goal and so Xanadu was the name. He gave a talk and presented a paper called “A File Structure for the Complex, the Changing and the Indeterminate” at the Association for Computing Machinery in 1965 that laid out his vision. This was the dawn of interactivity in computing. Digital Equipment had launched just a few years earlier and brought the PDP-8 to market that same year. The smell of change was in the air and Nelson was right there.
After that, he started to see all these developments around the world. He worked on a project at Brown University to develop a word processor with many of his ideas in it. But the output of that project - as with most word processors since - was to get things printed. He believed content was meant to be created and to live its entire lifecycle in digital form. This would provide perfect forward and reverse citations, text enrichment, and change management. And maybe, if we all stand on the shoulders of giants, it would allow us to avoid rewriting or paraphrasing the works of others to include them in our own writings. We could do more without that tedious regurgitation.
He furthered his counter-culture credentials by going to Woodstock in 1969. Probably not for that reason, but it happened nonetheless. And he traveled and worked with more and more people and companies, learning and engaging and enriching his ideas. And then he shared them.
Computer Lib/Dream Machines was a paperback book. Or two. It had a cover on each side. Originally published in 1974, it was one of the most important texts of the computer revolution. Steven Levy called it an epic. It’s rare to find it for less than a hundred bucks on eBay at this point because of how influential it was and what an amazing snapshot in time it represents.
Xanadu was to be a hypertext publishing system in the form of Xanadocs, or files that could be linked to from other files. A Xanadoc used Xanalinks to embed content from other documents into a given document. These embedded spans of text, called transclusions, would update in any document that included them whenever the source document changed. The iterations towards working code were slow, and the years ticked by. That talk in 1965 gave way to the 1970s, then the 80s. Some thought him brilliant. Others didn’t know what to make of it all. But many knew of his ideas for hypertext, and once the idea was known, it seemed deterministic - it would happen somewhere, somehow.
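To make the idea concrete - a toy sketch, not Nelson's actual design - a transclusion stores a pointer into a source document and resolves it at read time, so edits to the source appear everywhere the span is included:

# Documents hold pointers into sources, never copies of their text.
sources = {
    "kubla-khan": "In Xanadu did Kubla Khan a stately pleasure-dome decree",
}

# A document mixes literal text with (source, start, end) transclusions.
doc = ["As Coleridge wrote: ", ("kubla-khan", 3, 13), "..."]

def render(parts):
    out = []
    for part in parts:
        if isinstance(part, tuple):
            src, start, end = part
            out.append(sources[src][start:end])  # resolved live at read time
        else:
            out.append(part)
    return "".join(out)

print(render(doc))  # changes to sources["kubla-khan"] appear on the next render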
Byte Magazine published many of his thoughts in a 1988 piece called “Managing Immense Storage,” and by then the personal computer revolution had come in full force. Tim Berners-Lee put the first node of the World Wide Web online the next year, using a protocol they called the Hypertext Transfer Protocol, or HTTP. Yes, the hypertext philosophy was almost a means of paying homage to the hard work and deep thinking Nelson had put in over the decades. But not everyone saw it as though Nelson had made great contributions to computing.
“The Curse of Xanadu” was an article published in Wired Magazine in 1995. In it, the author points out that the web had come along using many of the ideas Nelson and his teams had worked on over the years - but the web actually shipped, whereas Xanadu hadn’t. Once shipped, the web rose in popularity, becoming the ubiquitous technology it is today. The article treated Xanadu as vaporware. But there is a deeper, much more important meaning to Xanadu in the history of computing.
Perhaps inspired by the Wired article, the group released an incomplete version of Xanadu in 1998. But by then other formats - including PDF, which was invented in 1993, and Microsoft Word’s .doc - were the primary mechanisms by which we stored documents, and first Gopher and then the web were spreading to interconnect humans with content.
The Xanadu story isn’t a tragedy. Would we have had hypertext as a part of Douglas Engelbart’s oNLine System without it? Would we have had object-oriented programming, or later the World Wide Web, without it? The very word hypertext is almost an homage to Nelson’s work, even if those who use it don’t know it. And the look and feel of his work lives on in places like GitHub - whether directly influenced or not - where we can see changes in code side-by-side with actual production code, changes that are stored forever and can be rolled back.
Larry Tesler coined the term cut and paste. While Nelson calls him a friend in Werner Herzog’s Lo and Behold, Reveries of the Connected World, he also points out that Tesler’s term is flawed. And I think this is where we as technologists have to sometimes trim down our expectations of how fast evolutions occur. We take tiny steps because, as humans, we can’t keep pace with the rapid rate of technological change. We can look back and see a two-steps-forward, one-step-back approach since the dawn of written history. Nelson still doesn’t think the metaphors that harken back to paper have any place in the online written word.
Here’s another important trend in the history of computing. As we’ve transitioned to more and more content living online exclusively, the content has become diluted. One publisher I wrote online pieces for asked that they all be +/- 700 words and asked that paragraphs be no more than 4 sentences long (preferably 3) and the sentences should be written at about a 5th or 6th grade level. Maybe Nelson would claim that this de-evolution of writing is due to search engine optimization gamifying the entirety of human knowledge and that a tool like Xanadu would have been the fix. After all, if we could borrow the great works of others we wouldn’t have to paraphrase them. But I think as with most things, it’s much more nuanced than that.
Our always-online, always-connected brains can only accept smaller snippets. So that’s what we gravitate towards. Actually, we have plenty of capacity for whatever we choose to immerse ourselves in. But we have more options than ever before, and of course we immerse ourselves in video games or other less literary pursuits. Or are they more literary? Some generations thought books to be dangerous. As do all oppressors. So who am I to judge where people choose to acquire knowledge, or what kind they indulge themselves in? Knowledge is power and I’m just happy they have it. And they have it in part because others were willing to water down the concepts to ship a product. Because the history of technology is about evolutions, not revolutions. And those often take generations. And Nelson is responsible for some of the evolutions that brought us the ht in http and html. And for that we are truly grateful!
As with the great journey in Lord of the Rings, rarely is greatness found alone. The Xanadu adventuring party included Cal Daniels, Roger Gregory, Mark Miller, Stuart Greene, Dean Tribble, and Ravi Pandya. The project became a part of Autodesk in the 80s, got rewritten in Smalltalk, and was considered a rival to the web, but it is really more of an evolutionary step on that journey. If anything it’s a divergence from, and then a convergence back to, Vannevar Bush’s Memex.
So let me ask this as a parting thought: are the places where you are not willing to sacrifice any of your core designs or beliefs worth the price being paid? Are they worth someone else ending up with a place in the history books, where (as with this podcast) we oversimplify complex topics to make them digestible? Sometimes it’s worth it. In no way am I in a place to judge the choices of others. Only history can really do that - but when it happens it’s usually an oversimplification anyways… So the building blocks of the web lie in irreverence - in hypertext. And while some grew out of irreverence and diluted their visions after an event like Woodstock, others like Nelson and his friend Douglas Engelbart forged on. Their visions didn’t come with commercial success. But as integral building blocks of the modern connected world, they represent as great a mind as practically anyone else in computing.
Bob Puff, Computer Software Services
Bob Puff is owner of Computer Software Services, a company that began creating hardware and software for the Atari 8-bit computers in 1982. Bob became president of the company in 1991. He designed a bevy of hardware products for the Atari computers, including The Black Box, a hard drive host adapter; The Multiplexer, a networking system; the UltraSpeed Plus operating system upgrade; upgrades for the XF551 floppy drive; the Super-E Burner EPROM burner; and others. He also created a number of popular utility programs, including the BobTerm terminal program; Disk Communicator, to convert boot disks to a single compressed file for transfer over modem; and MYDOS version 4.53; among other software.
This interview took place on April 27, 2021.
Computer Software Services legacy site
1993 Computer Software Services catalog scan
ANTIC Interview 393 - Charles Marslett, MYDOS and FastChip
Format experiment: splitting magazine coverage across two episodes and separating game reviews to their own episodes.
The FBI’s attempted investigation of the nuPrometheus League.
I wish there was a dramatic conclusion to this 1990 editorial, but we’ve heard nothing from the nuPrometheus League since their first and only dispatch.
Original text from Macworld Magazine, September 1990.
Project Xanadu, started in 1960, is perhaps the oldest hypertext system. Its creator, Ted Nelson, coined the term hypertext just to describe Xanadu. But it's not just a tool for linking data. Nelson's vision of hypertext is a lot more complicated than what we see in the modern world wide web. In his view, hypertext is a means to reshape the human experience. Today we are starting a dive into the strange connection between hypertext, networking, and digital utopianism.
James and John discuss eBay Finds: 128k Mac with box, Alice reproduction floppy, and Newton dummy. They get Mission Starlight and Sky Shadow running, and news is all about the Apple Spring Loaded event.
Valerie (Atkinson) Manfull, Atari Game Research Group
Valerie Atkinson was a member of Atari's Game Research Group. Now named Valerie Manfull, she was on the team that designed and programmed the game Excalibur, along with Chris Crawford and Larry Summers. Excalibur was published by Atari Program Exchange in fall 1983. She is also one of the programmers of Ballsong, along with Douglas Crockford. Ballsong is a music and graphics demo program released by Atari, in which a ball bounces on the screen in response to an improvised tune. She was also one of the programmers, with Ann Marion, of TV Fishtank, a demonstration of an artificially intelligent fish. (It's unclear if the fishtank program was released anywhere, though it apparently was shown at the 1984 SIGGRAPH conference.)
This interview took place on April 22, 2021.
ANTIC Episode 4 - Chris Crawford
ANTIC Interview 240 - Douglas Crockford
TV Fishtank at SIGgraph
Jim Leiterman describes TV Fishtank
Chris Crawford describes the development of Excalibur in The Art of Computer Game Design
Excalibur announced in Atari Program Exchange, fall 1983
Excalibur review in Atari Connection
Excalibur at AtariMania
Video of Ballsong
ANTIC Episode 77 - Jason Moore, PhD
In this episode of ANTIC The Atari 8-Bit Computer Podcast… Jason Moore joins us to discuss his atariprojects.org Web site and we discuss all the news rocking the Atari 8-bit world...
Interview index: here
What We’ve Been Up To
YouTube videos this month
New at Archive.org
Kay and Carrington set sail on the virtual high seas to give a talk at this year's JoCo Cruise, an annual oceangoing event for technophiles, tabletop gamers, and creative-minded people. In other words, a bunch of nerds. The perfect venue to discuss all things Infocom.
James and John discuss eBay Finds: Test Drive a Mac brochure, Microloop 1100 Spirometer, Braun/Apple calculator. They look back at Macworld April 1991, and news includes upcoming Apple event rumors, faux Mac BBS, and chimes of death.
This was a hard episode to do. Because telling the story of Instagram is different than explaining the meaning behind it. You see, on the face of it - Instagram is an app to share photos. But underneath that it’s much more. It’s a window into the soul of the Internet-powered culture of the world. Middle schoolers have always been stressed about what their friends think. It’s amplified on Instagram. People have always been obsessed with and copied celebrities - going back to the ages of kings. That too is on Instagram. We love dogs and cute little weird animals. So does Instagram.
Before Instagram, we had photo sharing apps, like Hipstamatic. Before Instagram, we had social networks, like Twitter and Facebook. How could Instagram do something different and yet so similar? How could it offer that window into the world when the lens those photos are snapped through is rose-colored? Do they show us reality, or what we want reality to be? Could it be that the food we throw away or the clothes we donate tell us more about us as humans than what we eat or keep? Is the illusion worth billions of dollars a year in advertising revenue, while the reality represents our repressed shame?
Think about that as we go through this story.
If you build it, they will come. Everyone who builds an app just kinda’ automatically assumes that throngs of people will flock to the App Store, download the app, and they will be loved and adored and maybe even become rich. OK, not everyone thinks such things - and with the number of apps on the stores these days, the chances are probably getting closer to those of a high school quarterback playing in the NFL. But in today’s story, that is exactly what happened.
And Kevin Systrom had already seen it happen. He was offered a job as one of the first employees at Facebook while still going to Stanford. That’ll never be a thing. Then while on an internship he was asked to be one of the first Twitter employees. That’ll never be a thing either. But they were things, obviously!
So in 2010, Systrom started working on an app he called Burbn, and within two years sold the company - by then called Instagram - for one billion dollars. In doing so, he and his co-founder Mike Krieger helped forever change the deal landscape for mergers and acquisitions of apps and, more profoundly, gave humanity lenses through which to see the world we want to see - if not reality.
Systrom didn’t have a degree in computer science. In fact, he taught himself to code after working hours, then during working hours, and by osmosis through working with some well-known founders.
Burbn was an app to check in and post plans and photos. It was written in HTML5, and in a Cinderella story, Systrom was able to raise half a million dollars in funding from Baseline Ventures and Andreessen Horowitz, bringing in Mike Krieger as a co-founder.
At the time, Hipstamatic was the top photo manipulation and filtering app. Given that the iPhone came with a camera on par with (if not better than) most digital point-and-shoots at the time, the pair re-evaluated the concept and instead leaned further into photo sharing, while still maintaining the location tagging.
The original idea was to swipe right and left, as we do in apps like Tinder. But instead they chose to show photos in chronological order, and they used a now-iconic 1:1 aspect ratio - the photos were square - so there was room on the screen to show metadata and a taste of the next photo, to keep us scrolling. The camera was simple, like the Holga camera Systrom had been given while studying abroad during his time at Stanford. That camera made pictures a little blurry, and in an almost-filtered way made them look almost artistic.
After Systrom graduated from Stanford in 2006, he worked at Google, then NextStop, and then got the bug to make his own app. And boy did he. One thing though: even his wife Nicole didn’t think she could take good photos, having seen those from a friend of Systrom’s. He said the photos were so good because of the filters. And so we got the first filter, X-Pro II, so she could take great photos on the iPhone 3G.
Krieger shared the first post on Instagram on July 16, 2010 and Systrom followed up within a few hours with a picture of a dog. The first of probably a billion dog photos (including a few of my own). And they officially published Instagram on the App Store in October of 2010.
After adding more and more filters, Systrom and Krieger closed in on one of the greatest growth hacks of any app: they integrated with Facebook, Twitter, and Foursquare so you could take the photo in Instagram and shoot it out to one of those apps - or all three.
At the time Facebook was more of a browser tool. Few people used the mobile app. And for those that did try and post photos on Facebook, doing so was laborious, using a mobile camera roll in the app and taking more steps than needed. Instagram became the perfect glue to stitch other apps together. And rather than always needing to come up with something witty to say like on Twitter, we could just point the camera on our phone at something and hit a button.
The posts had links back to the photo on Instagram. They hit 100,000 users in the first week and a million users by the end of the year. Their next growth hack was to borrow the hashtag concept from Twitter and other apps, which they added in January of 2011.
Remember how Systrom interned at Odeo and turned down the offer to go straight to Twitter after college? Twitter didn’t have photo sharing at the time, but Twitter co-founder Jack Dorsey had shown Systrom plenty of programming techniques, and the two stayed in touch. Dorsey became an angel investor in a $7 million Series A and the first real influencer on the platform, sending the link to every photo to all of his Twitter followers every time he posted. The growth continued. In June 2011 they hit 5 million users, and they doubled to 10 million by September 2011. I was one of those users, posting the first photo to @krypted that fall - being a nerd, it was of the iOS 5.0.1 update screen, and according to the lone comment on the photo, my buddy @acidprime apparently took the same photo.
They spent the next few months just trying to keep the servers up and running, and released an Android version of the app in April of 2012, just a couple of days before taking on $50 million in venture capital. But that money didn’t need to last long - they sold the company to Facebook for a billion dollars a few days later, effectively doubling each investor in that last round of funding and shooting up to 50 million users by the end of the month.
At 13 employees, that’s nearly $77 million per employee. Granted, much of that went to Systrom and the investors. The Facebook acquisition seemed great at first. Instagram got access to bigger resources than even a few more rounds of funding would have provided.
Facebook helped them scale up to 100 million users within a year, and following Facebook TV and the brief but impactful release of Vine at Twitter, Instagram added video sharing, photo tagging, and the ability to add links in 2013. Looking at the history of their feature releases, they’re slow and steady, and probably the most user-centered releases I’ve seen. And in 2013 they grew to 150 million users, proving the kinds of rewards that come from releasing that way.
With that kind of growth it might seem that it can’t last forever - and yet on the back of new editing tools, a growing team, and advertising tools, they managed to hit a staggering 300 million users in 2014.
While they had sold thoughtful, direct, human-sold advertising before, they opened up the ability to buy ads to all advertisers in 2015, piggybacking on the Facebook ad-selling platform. That’s the same year they introduced Boomerang, which looped bursts of photos forward and in reverse. It was cute for a hot minute.
2016 saw the introduction of analytics that included demographics, impressions, likes, reach, and other tools for businesses to track the performance not only of ads but of posts. As with many tools, it was built for the famous influencers who had the ear of the founders and management team - and then made available to anyone. They also introduced Instagram Stories, which was a huge development effort, and they owned that they copied it from Snapchat - a surprising and truly authentic move for a Silicon Valley startup. And we could barely call them a startup any longer, shooting past half a billion users by the middle of the year and 600 million by the end of it.
That year, they also brought us live video, a Windows client, and - one of my favorite features, given how many people post in different languages - automatic translation of posts.
But something else happened in 2016. Donald Trump was elected to the White House. This is not a podcast about politics but it’s safe to say that it was one of the most divisive elections in recent US history. And one of the first where social media is reported to have potentially changed the outcome. Disinformation campaigns from foreign actors combined with data illegally obtained via Cambridge Analytica on the Facebook network, combined with increasingly insular personal networks and machine learning-driven doubling down on only seeing things that appealed to our world view led to many being able to point at networks like Facebook and Twitter as having been party to whatever they thought the “other side” in an election had done wrong.
Yet Instagram was just a photo sharing site. They put the users at the center of their decisions. They promoted the good things in life. While Zuckerberg claimed that Facebook couldn’t have helped change any outcomes and that Facebook was just an innocent platform that amplified human thoughts, Systrom openly backed Hillary Clinton. And yet, even with disinformation spreading on Instagram, they seemed immune from accusations and from having to go to Capitol Hill to be grilled following the election. Being good to users apparently has its benefits.
However, some regulation needed to happen. In 2017, the Federal Trade Commission stepped in to force influencers to be transparent about their relationships with advertisers - Instagram responded by giving us the ability to mark a post as sponsored. Still, Instagram revenue spiked past three and a half billion dollars in 2017.
Instagram revenue grew past 6 billion dollars in 2018. Systrom and Krieger stepped away from Instagram that year. It was now on autopilot.
Instagram revenue shot past 9 billion dollars in 2019. In those years they released IGTV and tried to get more resources from Facebook, contributing far more to the bottom line than they took.
2020 saw Instagram ad revenue close in on 13.86 billion dollars with projected 2021 revenues growing past 18 billion.
In The Picture of Dorian Gray, from 1890, Lord Henry describes the impact of influence as destroying our genuine and true identity, taking away our authentic motivations, and - as Shakespeare would have put it - making us servile to the influencer. Some are already famous and so become influencers on the product naturally, like musicians, politicians, athletes, and even the Pope. Others become famous by getting showcased by the @instagram feed or some other prominent account. These influencers often stage a beautiful life, and to be honest, sometimes we just need that as a little mind candy. But other times it can become too much, forcing us to constantly compare our skin to doctored skin, our lifestyle to staged lifestyles, and our number of friends to those who might just have bought theirs. And seeing this obvious manipulation gives some of us even more independence than we might have felt before. We have a choice: to be or not to be.
The Instagram story is one with depth. Those influencers are one of its more visible aspects, going back to the first sponsored photos, posted by Snoop Dogg. And when Mark Zuckerberg decided to buy the company for a billion dollars, many thought he was crazy. But once they turned on the ad revenue machine - which Zuckerberg insisted Systrom wait on until the company had enough users - it was easy to go from 3 to 6 to 9 to over 13 and now likely over 18 billion dollars. That’s a greater than 30:1 return on investment, helping to prove that such lofty acquisitions aren’t crazy.
It’s also a story of monopoly, or at least of suspected monopolies. Twitter tried to buy Instagram and Systrom claims to have never seen a term sheet with a legitimate offer. Then Facebook swooped in and helped fast-track regulatory approval of the acquisition. With the acquisition of WhatsApp, Facebook owns four of the top 6 social media sites, with Facebook, WhatsApp, Facebook Messenger, and Instagram all over a billion users and YouTube arguably being more of a video site than a true social network. And they tried to buy Snapchat - only the 17th ranked network.
More than 50 billion photos have been shared through Instagram. That’s about a thousand a second. Many are beautiful...
Linda Brownstein, Atari VP Special Projects
As I've researched Atari and its 8-bit computer projects over the years, one name has come up over and over again, attached to the most interesting projects. Linda S. Gordon. Executive Director of Atari Computer Camps? Linda. Executive Producer of The Magic Room, Atari's movie about its camps? Linda. Atari's collaboration with Club Med to offer computer labs at vacation destinations — Linda again. Atari Club, the fan group that published Atari Age magazine - Linda launched that. More recently, in my interview with Ann Lewin-Benham of the Capital Children's Museum, Linda's name came up once again -- she was the liaison between Atari and the museum. Linda worked on the most interesting projects.
Today, her name is Linda Brownstein. Linda joined Atari in December 1980 as Vice President of Special Projects, where she worked on most of the projects that I mentioned before. In October 1983 she became Senior Vice President in Atari's Education group. She left the company in July 1984 after Jack Tramiel took over the company.
This interview took place on April 21, 2021.
ANTIC Interview 78 - Manny Gerard, The Man Who Fired Nolan
ANTIC Special Episode - Atari Summer Camp
ANTIC Interview 410 - Ann Lewin-Benham, Capital Children's Museum
ANTIC Interview 185 - Ted Kahn
Atari Computer Camps — The Magic Room
Video version of this interview
Even after nearly 50 years C remains a force in the programming world. Anytime you browse the web, or even log into a computer, C is somewhere in the background. This episode I wrap up my series on C by looking at its early development and spread. We will get into the 1st and 2nd C compilers ever written, and take a look at how a banned book led to generations of avid C programmers.
James and John discuss eBay Finds: JLPGA PowerBook 170, LaserWriter Plus, and Macintosh IIfx. They examine the Twentieth Anniversary Macintosh Experience CDROM, and news includes WWDC announced, OS 9.2 on Switch, and latest NanoRaptor creations.
Mark Simonson, Atari Artist and Font Designer
Mark Simonson used his Atari computers to create art that was published in magazines in the 1980s, including a portrait of Nolan Bushnell that was commissioned by TWA Ambassador, an inflight magazine; a colorful street scene for the cover of Minnesota Monthly, the magazine of Minnesota Public Radio; and a juggler for the cover of Credit Union Advantage magazine, among others.
Professionally, Mark is a font designer. He created Atari Classic, a free TrueType font family for modern computers that looks like the Atari 8-bit screen font. Today, you'll see Atari Classic used in many Atari emulators, web sites, the WUDSN IDE, and elsewhere.
This interview took place on April 15, 2021.
Mark's Atari reminisce blog post
A friend gets his first game published in Computers & Video Games! 6 other magazines this episode, and I try to dissect my first kernel game, Worm War I. Or is it 1? It’s a super important distinction.
The early days of Apple’s culture of secrecy. If you had people digging through the garbage bins outside your corporate headquarters, you would be paranoid too!
Original text from Macworld Magazine, November 1989.
Introductory news clip from The Computer Chronicles with bonus crazy background saxophone for some reason.
Hugo Fiennes quote from the Computer History Museum’s iPhone development team panel discussion.
Steve Jobs’ “Super Secret Apple Rumours” podcast from the MWSF 2006 GarageBand demo.
Alleged insider comments on the damage Apple’s internal secrecy has done to Mac OS X at Michael Tsai’s blog, one of the few Macintosh news sources worth reading these days.
James and John discuss eBay Finds: Elements of Design, PowerBook 180c, and Quadra 605. They look back at MacAddict April 2001, and news includes MacFilm, and the best and worst Apple products of all time.
Ann Lewin-Benham, Director of Capital Children's Museum
Ann Lewin-Benham was executive director of the Capital Children's Museum in Washington, D.C. The museum was home to the first public-access computer center in the nation’s capital, and indeed, one of the first in the United States. In 1981, Atari and Apple each donated dozens of computers to the museum. The exact number is unclear, but 30 is the number I've seen most often for Atari's contribution.
The computer lab was called The Future Center. There, the museum offered computer literacy classes for people of all ages, from Compu-Tots for preschoolers to programming classes for adults; there was even a computer literacy session for members of Congress. It also used the lab for birthday parties. (Last year, I interviewed a woman who had her 8th birthday party at the museum.) The museum used more of its computers in its exhibit on communication. It established a software development laboratory, called Superboots, in which developers created custom software for the museum, and one product that was released commercially: the graphics program PAINT!
In a 1982 article titled A Day At The Capital Children's Museum, Melanie Graves described the scene:
"My twelve-year-old friend Sarah and I went to the museum to explore the computers. There are several dozen computers scattered throughout the building which are used for exhibits, classroom teaching and the development of educational software...
A machine that calls itself "Wisecracker" is the noisest of the computers that beckon visitors to the Communication exhibit. "My-name- is-Wise-crack-er," it says in a monotone, "Come-type-to-me." This message repeats endlessly until someone types at the keyboard or turns off the computer. "Hello, how are you?" Sarah typed, and pressed the return key. "Hel-lo-how-are-you," the machine’s voice responded. Sarah typed for awhile longer and then proclaimed, "It sure is dumb, but its voice is kind of cute."
The computer next to Wisecracker has a data base program that asked Sarah her name, where she came from, and other questions. It informed her that she was the thirty-seventh person from Virginia to type in data that day... "Fifty-five percent of the people who came here were girls," she told me. Next to the data base, a computer is set up with a music program. Sarah pressed some random keys, causing notes to sound. At the same time, the letter names of the notes appeared on the keys of a piano that was displayed on the screen.
There is also a Teletext terminal that tells inquirers about weather predictions, and news releases, the latest acquisitions at the public library, local cultural events and whatever else has been entered into the data base for that day...
After playing with Teletext, Sarah and I went to the Future Center, a room equipped with twenty Atari 800s. On weekdays, the classroom is available to school groups ranging from prekindergarten to high school. On weekends, families arrive for courses in programming. Classes have also been created for working people, senior citizens, community groups, congressional spouses and other special interest groups. This summer more than sixty students from the Washington, D.C. public schools attended one of two free month-long computer camps at the museum."
This interview took place on April 2, 2021.
Ann's web site
Museum in Atari Connection, Volume 1 Number 4
A Day At The Capital Children's Museum
C is easily one of the most influential programming languages in the world, and it's also one of the most popular languages in the world. Even after close to 50 years it remains in widespread and sustained use. In this series we are going to look at how C was developed, how it spread, and why it remains so relevant. To do that we need to start with background, and look at what exactly influenced C. This episode we are diving into some more ALGOL, CPL, BCPL, and eventually B.
James and Steve from Mac84 discuss eBay Finds: lot of eight Macs, Mac 400K external floppy drive with box, and Apple Shinjuku store opening pin. Steve gives us an update on his Mac84 projects, and news includes the 20th anniversary of Mac OS X, and the Mac SE/30 logic board recreation.
Steve Jobs returned to Apple in 1996. At the time, many people had a camera, like the Canon Elph released that year, maybe a video camera, and probably a computer; about 16% of Americans had a cell phone. Some had a voice recorder or a Discman, and some in the audio world had a four-track machine. Many had CD players, and maybe even a LaserDisc player.
But all of this was changing. Small, cheap microprocessors were leading to more and more digital products. The MP3 was starting to trickle around after being patented in the US that year. Netflix would be founded the next year, as DVDs started to spring up around the world. Ricoh, Polaroid, Sony, and most other electronics makers released digital video cameras. There were early e-readers, personal digital assistants, and even research into digital video recorders that could record your favorite shows so you could watch them when you wanted. In other words we were just waking up to a new, digital lifestyle. But the industries were fragmented.
Jobs and the team continued the work begun under Gil Amelio to reduce the number of products from 350 to about a dozen. They made products that were pretty and functional, and they revitalized Apple. But there was a strategy coming together in their minds, and it centered around digital media and the digital lifestyle. We take this for granted today, but mostly because Apple made it ubiquitous.
Apple saw the iMac as the centerpiece of a whole new strategy. But all this new media, with its massive files, needed a fast bus to carry all those bits. That bus had been started back in 1986 and slowly improved over the next few years in the form of IEEE 1394, or FireWire. Apple started it; Toshiba, Sony, Panasonic, Hitachi, and others helped bring it to the devices they made. FireWire could connect 63 peripherals at 100 megabits per second, later increased to 200 and then 400, with later revisions reaching toward 3200. Plenty fast enough to transfer those videos, songs, and whatever else we wanted.
iMovie was the first of the applications that fit into the digital hub strategy. It was originally released in 1999 for the iMac DV, the first iMac to come with built-in FireWire. I’d worked on Avid and SGI machines dedicated to video editing at the time, but this was the first time I felt like I was actually able to edit video. It was simple: it could import video straight from the camera and let me drag clips into a timeline and add some rudimentary effects. Simple, clean, and a product that looked cool. And here’s the thing: within a year Apple made it free. One catch. You needed a Mac.
This whole Digital Hub Strategy idea was coming together. As Steve Jobs would point out in a presentation about it at Macworld 2001, up to that point personal computers had mainly been about productivity: automating first the tasks of scientists, then, with the advent of the spreadsheet and databases, automating business and personal functions. A common theme in this podcast is that what drives computing is productivity, telemetry, and quality of life. The telemetry gains came with connecting humanity through the rise of the internet in the late 1990s. But these new digital devices were what was going to improve our quality of life. And anyone who could get their hands on an iMac was now doing so. But it still felt like a bit of a closed ecosystem.
Apple released a tool for making DVDs in 2001 for the Mac G4, which came with a SuperDrive - Apple’s version of an optical drive that could read and write CDs and DVDs. iDVD gave us the ability to add menus, slideshows (later easily imported from Keynote presentations when that was released in 2003), images as backgrounds, and more. Now we could take those videos we made and make DVDs to pop into our DVD players. Families all over the world could make their vacations look a little less like a bunch of kids fighting and a lot more like bliss. And for anyone who needed more, Apple had DVD Studio Pro - which many a film studio used to make the menus for movies for years.
They knew video was going to be a thing because, going back to the 90s, Jobs had tried to get Adobe to release Premiere for the iMac. But they’d turned him down, something he’d never forget. Instead, Jobs was able to sway Randy Ubillos to bring over a product a Macromedia board member had convinced him to work on, called Key Grip, which had been renamed Final Cut. Apple acquired the source code and development team and released it as Final Cut Pro in 1999. And iMovie for the consumer and Final Cut Pro for the professional turned out to be a home run. But another piece of the puzzle was coming together at about the same time.
Jeff Robbin, Bill Kincaid, and Dave Heller built a tool called SoundJam in 1998. They had worked on the failed Copeland project to build a new OS at Apple, and afterwards Robbin made a great old tool (one we might need again, with the way extensions are going) called Conflict Catcher, while Kincaid worked on the drivers for an MP3 player called the Diamond Rio. He saw these cool new MP3 things and tools like Winamp, which had been released in 1997, and decided to meet back up with Robbin for a new tool, which they called SoundJam and sold for $50.
Just so happens that I’ve never met anyone at Apple that didn’t love music. Going back to Jobs and Wozniak. So of course they would want to do something in digital music. So in 2000, Apple acquired SoundJam and the team immediately got to work stripping out features that were unnecessary. They wanted a simple aesthetic. iMovie-esque, brushed metal, easy to use. That product was released in 2001 as iTunes.
iTunes didn’t change the way we consumed music. That revolution was already underway. And that team didn’t just add brushed metal to the rest of the operating system. The look had begun with the QuickTime player (QuickTime itself dates back to 1991), but it was iTunes, by way of SoundJam, that sparked the spread of brushed metal.
SoundJam gave the Mac music visualizers as well - you know, those visuals on the screen generated by the sound waves of the music we were listening to. And while we didn’t know it yet, this would be the end of software coming in physical boxes. But something else big was coming: there was another device in the digital hub strategy. iTunes became the de facto tool used to manage what songs would go on the iPod, also released in 2001. That’s worthy of its own episode, which we’ll do soon.
You see, another aspect of SoundJam is that users could rip music off of CDs and into MP3s. The deep engineering work done to get the codec into the system survives here and there in the form of codecs accessible through APIs in the OS. And when combined with Spotlight to find music, it all became more powerful: building playlists, embedding metadata, and listening more insightfully to growing music libraries. But Apple didn’t want to just let people rip, find, sort, and listen to music. They also wanted to enable users to create it. So in 2002, Apple acquired a company called Emagic. Emagic’s product would become Logic Pro, and Gerhard Lengeling would go on to release a much simpler audio engineering tool in 2004 called GarageBand.
Digital video and video cameras were one thing. But cheap digital point-and-shoot cameras were suddenly everywhere. iPhoto was the next tool in the strategy, dropping in 2002. Here, we got a tool that could import all those photos from our cameras into a single library. Now called Photos, it gave us a taste of the machine learning to come by automatically finding faces in photos so we could easily make albums. Special services popped up to print books of our favorite photos. At the time, most cameras came with their own photo management software, developed as an afterthought. iPhoto was easy, worked with most cameras, and was very much not an afterthought.
Keynote came in 2003, making it easy to drop photos into a presentation, and maybe even into iDVD. Anyone who has seen a Steve Jobs presentation understands why Keynote had to happen, and if you look at the difference between many a PowerPoint and a Keynote presentation, it makes sense: Keynote was, in a way, a bridge between making work better and making life at home better.
That was the same year that Apple released the iTunes Music Store. This seemed like the final step in the move to get songs onto devices. Here, Jobs worked with music company executives to be able to sell music through iTunes - a strategy that would evolve over time to include podcasts (which the move effectively created), news, and even apps, as explored in the episode on the App Store. It ushered in an era of creative single-purpose apps that drove down the cost of software and made so much functionality approachable for so many.
iTunes, iPhoto, and iMovie were made to live together in a consumer ecosystem. So in 2003, Apple reached the point in the digital hub strategy where they were able to take our digital lives and wrap them up in a pretty bow. They called that product iLife - really a bundle of these apps, along with iDVD and GarageBand. Now these apps are free, but at the time the bundle would set you back a nice, easy, approachable $49.
All this content creation, from the consumer to the prosumer to the professional workgroup, meant we needed more and more storage. Depending on the codec, we could be running at hundreds of megabytes per second of content. So in 2004 Apple licensed the StorNext File System from a company called ADIC - a deal that helped rescue ADIC - and released a 64-bit clustered file system that ran over Fibre Channel. Suddenly all that new high-end creative content could be shared in larger and larger environments. We could finally have someone cutting a movie in Final Cut and then hand it off to someone else to cut, without unplugging a FireWire drive to do it. Professional workflows in a pure-Apple ecosystem were a thing.
Now you just needed a way to distribute all this content. So came iWeb in 2006, which allowed us to build websites quickly and bring all this creative content in. Sites could be hosted on MobileMe or uploaded to a web host via FTP. Apple had dabbled in web services since the 80s with AppleLink, then eWorld, then iTools, .Mac, and MobileMe - the culmination of the evolution of these services now referred to as iCloud.
And iCloud now syncs documents and more. Pages came in 2005 and Numbers in 2007, and they were bundled with Keynote to become Apple iWork, allowing for a competitor of sorts to Microsoft Office. The apps were later made free and ported to iOS as well. iCloud is a half-hearted attempt at keeping these synchronized between all of our devices.
Apple had been attacking the creative space from the bottom with the tools in iLife, but from the top as well. Competing with tools like Avid’s Media Composer, which had been around for the Mac going back to 1989, Apple bundled its professional video products into a single suite called Final Cut Studio. Here, Final Cut Pro, Motion, DVD Studio Pro, Soundtrack Pro, Color (obtained when Apple acquired Silicon Color and renamed its FinalTouch product), Compressor, Cinema Tools, and Qmaster (for distributing the processing power for the above tools) came in one big old box. iMovie and Garage Band served the consumer market; Final Cut Studio and Logic served the prosumer to professional market. And suddenly I was running around the world deploying Xsans into video shops, corporate talking-head editing studios, and ad agencies.
Another place where this happened was with photos. Aperture was released in 2005 and offered professional photographers tools to manage their large collections of images. It continued to evolve and get better over the years, but it represented one of the last pieces of the Digital Hub Strategy.
Because there was a new strategy underway. That’s the year Apple began development of the iPhone, and it represents a shift. With the iPhone released in 2007, followed by the first iPad in 2010, we saw a move away from growing new products in the digital hub strategy and toward migrating them to the mobile platforms: making them stand-alone apps that could be sold on App Stores and integrated with iCloud, killing off those that appealed to more specific needs in higher-end creative environments (like Aperture, discontinued in 2014), and folding some into other products (like Color becoming part of Final Cut Pro). The income from those products has now been eclipsed by mobile devices. Because when the returns from one strategy begin to crest - you know, like when the entire creative industry loves you - it’s time to move to another, bolder strategy. And that mobile strategy opened our eyes to always-online (or frequently online) synchronization and integration between products, like we get with Handoff and other technologies today.
In 2009 Apple acquired a company called Lala, which would later be folded into iCloud - but the impact on the Digital Hub Strategy was that it paved the way for iTunes Match, a cloud service that synced music from a local library to other Apple devices. It was a subscription service, more a stop-gap on the way to licensing music by subscription than a lasting stand-alone product. And other acquisitions would come over time and get woven in, such as Redmatica, Beats, and Swell.
Steve Jobs said exactly what Apple was going to do in 2001. In one of the most impressive implementations of a strategy, Apple had slowly introduced quality products since the late 90s that tactically ushered in the digital lifestyle: iMovie, iPhoto, iTunes, iDVD, iLife, and - in a sign of the changing times - iPod, iPhone, and iCloud. Then came the iPad, signaling the end of that era, because the digital lifestyle was by then ubiquitous. And the professional apps won over the creative industries. Until the strategy had been played out and Apple began laying the groundwork for the next one in 2005.
That mobile revolution was built in part on the creative influences of Apple. Tools that came after, like Instagram, made it even easier to take great photos and connect with friends in a way iWeb couldn’t - because we got to the point where “there’s an app for that”. And as its own tools were no longer needed, Apple cancelled some one by one, or even let Adobe Premiere eclipse Final Cut in many ways. Because, you know, sales going back to the iMac DV were enough to warrant building such products on the Apple platform, and eventually Adobe decided to do just that. Apple built many of these tools because there was a need and there weren’t great alternatives. Once there were great alternatives, Apple let its limited supply of software engineers go work on other things it needed done. Like building frameworks to enable a new generation of engineers to build amazing tools for the platform!
I’ve always considered the release of the iPad to be the end of the era where Apple was introducing more and more software - from the increased services on the server platform to tools that did anything and everything. But 2010 is just when we could notice what Jobs was doing. In fact, looking at it, we can easily see that the strategy shifted about 5 years before that. Because Apple was busy ushering in the next revolution in computing.
So think about this. Take an Apple, a Microsoft, or a Google - the developers of nearly every operating system we use today. What changes did they put in place 5 years ago that are just coming to fruition today? While product lifecycles are annual releases now, that doesn’t mean that when they have billions of devices out there the strategies don’t unfold much, much slower. You see, by peering into the evolutions of the past few years, we can see where they’re taking computing in the next few. Who did they acquire? What products will they release? What gaps does that create? How can we take those gaps and build products that get in front of them? This is where magic happens. Not when we’re too early, like General Magic was, but when we’re right on time. Unless we help set strategy upstream. Or is it all chaos and not in the least bit predictable? Feel free to send me your thoughts!
And thank you…
ANTIC Episode 76 - The Bill Kendrick Show
In this episode of ANTIC The Atari 8-Bit Computer Podcast… Bill Kendrick gets more mentions than when he’s on the show, Kay discovers he owns more Atari disk drives than the rest of the Atari community combined, and we discuss all the news rocking the Atari 8-bit world.
Interview index: here
What We’ve Been Up To
YouTube videos this month
New at Archive.org
New at Github
James and John discuss eBay Finds: Original Mac Picasso box, Powerbook Duo 230 plus DuoDock, and Blue Dalmatian iMac. James recovers photos from an old Olympus digital camera, and news includes iMac Pro discontinued, classic icons on your iPhone, and Justin gets real.
ANTIC Interview 408 - David Maynard, Electronic Arts Worms?
David Maynard created the game/simulation "Worms?" Published by Electronic Arts in 1983, it was a launch title -- one of the five initial releases from the company. David, one of EA's first employees, wrote Worms? for the Atari 8-bit in FORTH. It was later ported to the Commodore 64.
Worms? is an interactive version of Paterson's Worms, a family of cellular automata devised in 1971 by Mike Paterson and John Conway. It is an unusual program, in which the player teaches wormlike creatures how to move on a hexagonal grid -- what direction to move in various situations. The worm's goal is to grow and survive, and to capture more space on the grid than its competitors. Up to four worms could play simultaneously, with any combination of human- and computer-controlled worms.
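To make that mechanic a little more concrete, here's a minimal sketch of a Paterson's-worm-style simulation in Python -- my simplification, not Maynard's FORTH source. The rule table stands in for the moves a player would "teach" the worm:

```python
# A toy Paterson's-worm simulation on a triangular lattice. The worm
# eats one edge per step; at each point, the set of still-uneaten
# relative directions selects a turn from the rule table -- the part
# of the game the player "teaches". Illustrative only.
DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]  # 6 neighbors

def step(pos, d):
    return (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])

def run_worm(rule, steps=100):
    pos, heading = (0, 0), 0
    eaten = set()  # undirected edges, stored as frozensets of endpoints
    for _ in range(steps):
        # relative turns (0 = straight ahead) whose edge is still uneaten
        free = [r for r in range(6)
                if frozenset({pos, step(pos, (heading + r) % 6)}) not in eaten]
        if not free:
            return eaten, "died"  # every edge at this point is already eaten
        turn = rule.get(frozenset(free), free[0])
        heading = (heading + turn) % 6
        nxt = step(pos, heading)
        eaten.add(frozenset({pos, nxt}))
        pos = nxt
    return eaten, "alive"

# Example "taught" rule: empty table, so always take the smallest free turn.
edges, fate = run_worm({}, steps=50)
print(len(edges), "edges eaten;", fate)
```

The real game adds competing worms and scoring for captured territory, but the core loop is just this rule lookup.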
But the program's manual didn't tell you all that straight off. In fact, here's the first thing you saw after opening the package: "You will find detailed instructions enclosed. Do not read them. Instead, sit down and get started. Don't ask how. Just start. You know how these things work... Resist them. Do not read them for a very long time. In fact, do not read them until you know how the game works... Then never read the instructions. Innocence is bliss."
David also collaborated on Cut & Paste, a word processor published by Electronic Arts in 1984.
After our interview, David sent me a binder of Worms? development documentation and source code for Atari 8-bit and Commodore 64, all of which I have scanned and made available at Internet Archive and GitHub. The originals are going to the Strong Museum of Play, at David's request.
This interview took place on March 4, 2021.
Worms? source code for Atari 8-bit and Commodore 64
Scans of printed Worms? source code
Worms? Development Notes
Worms? at AtariMania
Michael Beeler's original Paterson's Worms paper
Martin Gardner's article in Scientific American
Darworms instructions and explanation
More Paterson's worm math
EA We See Farther poster
Can Kay and Carrington defeat the usurper king and restore peace to a fabled land? And more importantly, can they do it while turning into badgers and turtles and eels, oh my. Some claim there is more to being a knight than just eating ham and jam and Spam a lot. Surely you joust.
One of the great things about the modern Internet is the wide range of services and content available on it. You have news, email, games, even podcasts. And in each category you have a wide range of choices. That wide diversity is what makes the Internet so compelling and fun to explore. But what happens when you take away that freedom of choice? What would a network look like if there was only one news site, or one place to get email? Look no further than THE SOURCE. Formed in 1979 and marketed as the information utility for the information age, THE SOURCE looked remarkably like the Internet in a more closed-off format. The key word here is: looked.
James and John discuss eBay Finds: TAM Experience CDROM, original iMac, and an oddly configured Macintosh SE. Mark and Frank join to discuss the MacEffects Apple //e keyboard Kickstarter and an upcoming project, and news includes HomePod discontinued, obscure Apple products, and startup sound history.
James and John discuss eBay Finds: Apple "Freedom of Expression" printer poster, Mac lot, and Tecmar serial hard drive. John makes 1-bit wallpaper, and news includes Throwboy towels and a 3D-printed retro Mac enclosure.
Atari at the Science Fair: Michael Fripp: Silent E
An article was published in the Daily Press newspaper of Newport News, Virginia on February 13, 1985, titled "Best in Show at Science Fair: Computer program helps young readers conquer the 'silent e' challenge".
Two years ago Michael Fripp wanted to make sure his younger brother didn't face a hard time learning how to deal with the "silent e" principle in reading lessons. Putting his own Atari computer to work, Michael developed a fun, educational computer program designed to teach then 6-year-old Daniel how to successfully pronounce words like "cap," "tub" and "man" when an "e" is added to each.
"I remember the trouble I had with 'silent e' and didn't want him to have that trouble," says 13-year-old Michael, an eighth grader at Queens Lake Intermediate School. "There are lots of math but few English programs for computers. I hope to bridge that gap."
Michael went on to expand the "silent e" program, complete with more detailed instruction and graphics, through his computer science class at school and entered it as an exhibit in the York County Science Fair. Michael's educational reading program — "Silent E: A Program for K-3" — was judged best in show.
"We were pleased and surprised a computer program was picked because usually the judges pick pure science," says Carolyn Gaertner, who teaches math and computer science at the intermediate school.
Michael's computer program involves a simple story outline about an earthling named Tim and his spaceship landing on the planet EOP which is ruled by the Silent E's. There, Tim learns how the Silent E's simply and quickly turn words such as "pan" into "pane" with the addition of their favorite letter...
He has copyrighted the program and hopes to market it commercially. More than 100 hours of work have gone into the project...
"Computers are like a fever; they grow on you," says the young man. "I try to do a lot of programming at home but homework really limits me."
The large photograph accompanying the article shows young Michael, replete with calculator watch, in front of an Apple II computer, not an Atari.
I talked with Dr. Fripp to hear all about his program.
This interview took place on February 28, 2021.
Intro song: Silent E by Tom Lehrer
The Whole Earth ‘lectronic Link, or WELL, was started by Stewart Brand and Larry Brilliant in 1985, and is still available at well.com. We did an episode on Stewart Brand: Godfather of the Interwebs. He was a larger-than-life presence amongst many of the 1980s former hippies who were shaping our digital age. From his assistance producing The Mother of All Demos, to the Whole Earth Catalog inspiring Steve Jobs and many others, to his work with Ted Nelson, there are probably only a few degrees separating him from anyone else in computing.
Larry Brilliant is another counter-culture hero. He did work as a medical professional for the World Health Organization to eradicate smallpox and came home to teach at the University of Michigan. The University of Michigan had been working on networked conferencing since the 70s when Bob Parnes wrote CONFER, which would be used at Wayne State where Brilliant got his MD. But CONFER was a bit of a resource hog.
PicoSpan was written by Marcus Watts in 1983. Pico as in small - it’s also the name of a small text editor in many a UNIX variant - and span as in the network it spanned. Why small? Well, modems that dialed into bulletin boards were pretty slow back then.
Marcus worked at NETI, who bought the rights to PicoSpan to take it to market. Brilliant was the chairman of NETI at the time, and he approached Brand about starting up a bulletin-board system (BBS). Brilliant proposed that NETI would supply the gear and software and that Brand would use his, uh, brand - and his Whole Earth following - to fill the ranks. Brand’s non-profit, The Point Foundation, would own half and NETI would own the other half.
It became an early online community outside of academia, an important part of the rise of the splinter-nets, and a holdout from the Internet. For a time, at least.
PicoSpan gave users conferences. These were similar to PLATO Notes files, where a user could create a conversation thread and people could respond. These were (and still are) linear, threaded conversations. Rather than call them Notes like PLATO did, PicoSpan referred to them as “conferences”, since “online conferencing” was a common term used to describe meeting online for discussions at the time. EIES had been around going back to the 1970s, so Brand had some ideas about what an online community could be - having used it. And given the sharp drop in the cost of storage, there was something new PicoSpan could give people: the posts could last forever. Keep in mind, the Mac still didn’t ship with a hard drive in 1984. But hard drives were on the rise.
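For a sense of how simple that model is, here’s a hedged sketch of the conference-and-topic structure in Python - an illustration only, not PicoSpan’s actual data structures:

```python
# A minimal model of PicoSpan-style conferencing -- an illustration,
# not PicoSpan's real storage. A conference holds topics; each topic
# is an append-only, linear list of responses that never expire.
from dataclasses import dataclass, field

@dataclass
class Response:
    author: str
    text: str  # you own your own words

@dataclass
class Topic:
    title: str
    responses: list = field(default_factory=list)  # linear, in arrival order

@dataclass
class Conference:
    name: str
    topics: list = field(default_factory=list)

general = Conference("General")
general.topics.append(Topic("Welcome to The WELL"))
general.topics[0].responses.append(Response("mcclure", "Pull up a chair."))
print(general.topics[0].responses[0].text)
```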
And those bits that were preserved were manifested in words. Brand brought a simple mantra: You Own Your Own Words. This kept the hands of the organization clean and devoid of liability for what was said on The WELL - but also harkened back to an almost libertarian bent that many in technology had at the time. Part of me feels like libertarianism meant something different in that era. But that’s a digression. Whole Earth Review editor Art Kleiner flew up to Michigan to get the specifics drawn up. NETI’s investment had about a quarter million dollar cash value. Brand stayed home and came up with a name. The Whole Earth ‘lectronic Link, or WELL.
The WELL was not the best technology, even at the time. The VAX was woefully underpowered for as many users as The WELL would grow to, and other dial-in discussion services were springing up. But it was one of the most influential of the time. And not because they recreated the extremely influential Whole Earth Catalog in digital form like Brilliant wanted - which would probably have been similar to what Amazon reviews are like now. Instead, the draw was the people.
The community was fostered first by Matthew McClure, the initial director, who was a former typesetter for the Whole Earth Catalog. He’d spent 12 years on a commune called The Farm and was just getting back to society. They worked out that they needed to charge $8 a month plus another couple of bucks an hour to make a minimal profit.
So McClure worked with NETI to get the VAX up, and they created the first conference, General. Kevin Kelly from the Whole Earth Review and Brand would start discussions, and Brand mentioned The WELL in some of his writings. A few people joined, and then a few more.
Others from The Farm would join him. Cliff Figallo, known as Fig, was user 19, and John Coate, who went by Tex, came in to run marketing. In those first few years they started to build up a base of users.
It started with hackers and journalists, who got free accounts. And from there great thinkers joined up. People like Tom Mandel from Stanford Research Institute, or SRI. He would go on to become the editor of Time Online. His partner Nana. Howard Rheingold, who would go on to write a book called The Virtual Community. And they attracted more. Especially Dead Heads, who helped spread the word across the country during the heyday of the Grateful Dead.
Plenty of UNIX hackers also joined. After all, the community was finding a nexus in the Bay Area at the time. They added email in 1987 and it was one of those places you could get on at least one part of this whole new internet thing. And need help with your modem? There’s a conference for that. Need to talk about calling your birth mom who you’ve never met because you were adopted? There’s a conference for that as well. Want to talk sexuality with a minister? Yup, there’s a community for that. It was one of the first times that anyone could just reach out and talk to people. And the community that was forming also met in person from time to time at office parties, furthering the cohesion.
We take Facebook groups, Slack channels, and message boards for granted today. We can be us or make up a whole new version of us. We can be anonymous and just there to stir up conflict like on 4Chan or we can network with people in our industry like on LinkedIn. We can chat real time, which is similar to the Send option on The WELL. Or we can post threaded responses to other comments. But the social norms and trends were proving as true then as now. Communities grow, they fragment, people create problems, people come, people go. And sometimes, as we grow, we inspire.
Those early adopters of The WELL opened the eyes of people like Craig Newmark, who went on to found Craigslist, to the growing power of the Internet. And future developers at Apple. Hippies versus nerds - but not really versus: it was a coming to terms, moving from the “computers are part of the military-industrial complex keeping us down” philosophy to the freer, libertarian information-superhighway thinking that persisted for decades. The thought that the computer would set us free and connect the world into a new nation, as John Perry Barlow would sum up perfectly in “A Declaration of the Independence of Cyberspace”.
By 1990, people like Barlow could make a post on The WELL from Wyoming and have Mitch Kapor, the founder of Lotus, makers of Lotus 1-2-3, show up at his house after reading the post - and they could join forces with the fifth employee of Sun Microsystems, GNU-debugging cypherpunk John Gilmore, to found the Electronic Frontier Foundation. And in a sign of the times, that’s the same year The WELL got fully connected to the Internet.
By 1991 they had grown to 5,000 subscribers. That was the year Bruce Katz bought NETI’s half of The WELL for $175,000. Katz had pioneered the casual shoe market, changing the name of his family’s shoe business to Rockport and selling it to Reebok for over $118 million.
The WELL had posted a profit a couple of times but by and large was growing slower than competitors. Although I’m not sure any of the members cared about that. It was a smaller community than many others, but they could meet in person and they seemed to congeal in ways that other communities didn’t. They would keep increasing in size over the next few years. In that time Fig replaced himself with Maurice Weitman, or Mo - who had been the first person to sign up for the service. And Tex soon left as well.
Tex would go on to become an early webmaster of The Gate, the San Francisco Chronicle’s online community. Fig joined AOL’s GNN and then became director of community at Salon.
But AOL. You see, AOL was founded in the same year as The WELL. And by 1994 AOL was up to 1.25 million subscribers, with over a million logging in every day. CompuServe, Prodigy, GEnie, and Delphi were on the rise as well. The WELL had thousands of posts a day by then but was losing money and not growing like the others. I think the users of the service were just fine with that, though. The WELL was still growing slowly, and yet for many it was too big. Some left. Some stayed. Other communities, like The River, fragmented off. By then, The Point Foundation wanted out, so they sold their half of The WELL to Katz for $750,000 - leaving Katz as the first full owner of The WELL.
I mean, they were an influential community because of some of the members, sure, but more because of the quality of the discussions. Academics, drugs, and deeply personal information. And they had always complained about Fig and Tex or whoever was in charge - you know, the counter-culture is always mad at “The Management.” But Katz was not one of them. He honestly seems to have tried to improve things - but it seems like everything he tried blew up in his face.
So Katz further alienated the members and fired Mo and brought on Maria Wilhelm, but they still weren’t hitting that hyper-growth, with membership getting up to around 10,000 - but by then AOL was jumping from 5,000,000 to 10,000,000. But again, I’ve not found anyone who felt like The WELL should have been going down that same path. The subscribers at The WELL were looking for an experience of a completely different sort. By 1995 Gail Williams allowed users to create their own topics and the unruly bunch just kinda’ ruled themselves in a way. There was staff and drama and emotions and hurt feelings and outrage and love and kindness and, well, community.
By the late 90s, the buzz at many a company was all about building communities, and there were indeed plenty of communities growing. But none like The WELL. And given that some of the founders of Salon had been users of The WELL, Salon bought The WELL in 1999 and just kinda’ let it fly under the radar. The influence continued, with various journalists as members.
The web came. And the members of The WELL continued their community. Award-winning, but in a way a snapshot in time. Living in an increasingly secluded corner of cyberspace - a term that first began life in the present tense on The WELL - if you got it, you got it.
In 2012, after trying to sell The WELL to another company, Salon finally sold The WELL to a group of members who had put together enough money to buy it. And The WELL moved into the current, more modern form of existence.
To quote the site:
Welcome to a gathering that’s like no other. The WELL, launched back in 1985 as the Whole Earth ‘Lectronic Link, continues to provide a cherished watering hole for articulate and playful thinkers from all walks of life.
For more about why conversation is so treasured on The WELL, and why members of the community banded together to buy the site in 2012, check out the story of The WELL.
If you like what you see, join us!
It sounds pretty inviting. And it’s member supported. Like National Public Radio kinda’. In what seems like an antiquated business model, it’s $15 per month to access the community. And make no mistake, it’s a community.
You Own Your Own Words. If you pay to access a community, you don’t sign the ownership of your words away in a EULA. You don’t sign away rights to sell your data to advertisers along with having ads shown to you in increasing numbers in a hunt for ever more revenue. You own more than your words, you own your experience. You are sovereign.
This episode doesn’t really have a lot of depth to it. Just as most online forums lack the kind of depth that could be found on the WELL. I am a child of a different generation, I suppose.
Through researching each episode of the podcast, I often read books, conduct interviews (a special thanks to Help A Reporter Out), lurk in conferences, and try to think about the connections, the evolution, and what the most important aspects of each are. There is a great little book from Katie Hafner called The Well: A Story of Love, Death & Real Life. I recommend it. There’s also Howard Rheingold’s The Virtual Community and John Seabrook’s Deeper: Adventures on the Net. Oh, and From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism from Fred Turner, and Cyberia by Douglas Rushkoff. At a minimum, I recommend reading Katie Hafner’s Wired article and then her most excellent book!
Oh, and to hear about other ways the 60s Counterculture helped to shape the burgeoning technology industry, check out What the Dormouse Said by John Markoff.
And The WELL comes up in nearly every book on the era as one of the early commercial digital communities. It’s been written about in Wired and in The Atlantic, and it makes appearances in books like Broad Band by Claire Evans and The Internet: A Historical Encyclopedia.
The business models out there to build and run and grow a company have seemingly been reduced to a select few. Practically every online community has become free, with advertising and data being the currency we trade in exchange for a sense of engagement with others.
As network effects set in and billionaires are created, others own our words. They think the lifestyle business is quaint - that if you aren’t outgrowing a market segment, you are shrinking. And from the outside, a subscription site that charges a monthly fee for access to CGI code, with a user experience that predates the UX field, might affirm that philosophy - especially since anyone can see your real name. But if we look deeper we see a far greater truth: these barriers keep a small corner of cyberspace special - free from Russian troll farms and election stealing and spam bots. And without those distractions we find true engagement. We find real connections that go past the surface. We find depth. It’s not lost after all.
Thank you for being part of this little community. We are so lucky to have you. Have a great day.
Most early stage startups run on, and so seemingly need, heroic efforts from brilliant innovators working long hours to accomplish impossible goals. Tesla certainly had plenty of these as an early stage startup and continues to - as do the other Elon Musk startups. He seems to truly understand and embrace that early stage startup world, and those around him seem to as well.
As a company grows we have to trade those sprints of heroic output for steady streams of ideas and quality. We have to put development on an assembly line. Toyota famously put the ideas of Deming and other post-World War II process experts into their production lines and reaped big rewards - becoming the top car manufacturer in the process.
Not since the Ford Model T birthed the assembly line had auto makers seen as large an increase in productivity. And make no mistake, technology innovation is about productivity increases. We forget this sometimes when young, innovative startups come along claiming to disrupt industries. Many of those do, backed by seemingly endless amounts of cash to get them to the next level in growth. And the story of Tesla is as much about productivity in production as it is about innovative and disruptive ideas. And the story is as much about a cult of personality as it is about massive valuations and quality manufacturing.
The reason we’re covering Tesla in a podcast about the history of computers is that, at the heart of it, it’s a story about startup culture clashing head-on with decades-old know-how in an established industry. This happens with nearly every new company: there are new ideas, an organization is formed to support them, and as the organization grows, the innovators are forced to come to terms with the fact that they have greatly oversimplified the world.
Tesla realized this. Just as PayPal had realized it before. But it took a long time to get there. The journey began much further back. Rather than start with the discovery of the battery or the electric motor, let’s start with the GM Impact, initially shown off at the 1990 LA Auto Show. It’s important because Alan Cocconi was able to help take some of what GM learned from the 1987 World Solar Challenge race using the Sunraycer and start putting it into a car they could roll off the assembly lines in the thousands.
They needed to do this because the California Air Resources Board, or CARB, was about to require fleets to go 2% zero-emission, or powered by something other than fossil fuels, by 1998 with rates increasing every few years after that. And suddenly there was a rush to develop electric vehicles. GM may have decided that the Impact, later called the EV1, proved that the electric car just wasn’t ready for prime time, but the R&D was accelerating faster than it ever had before then.
In 2000, NuvoMedia was purchased by Gemstar-TV Guide International for $187 million. They’d made the Rocket eBook e-reader. That’s important because the co-founders of that company were Martin Eberhard, a University of Illinois Champaign-Urbana grad, and Marc Tarpenning.
Alan Cocconi was able to take what he’d learned and form a new company called AC Propulsion. He put together a talented group and they built a couple of different cars, including the tZero. Many of the ideas that went into the first Tesla came from the tZero, and Eberhard and Tarpenning tried to get Tom Gage and Cocconi to take the tZero into production. The tZero was a sleek sportscar that began life powered by lead-acid batteries; it could get from zero to 60 in just over four seconds and run for 80-100 miles. It used regenerative braking similar to what can be found in the Prius (to oversimplify it), and the car took about an hour to charge. The cars were made by hand and cost about $80,000 each. AC Propulsion had other projects, so they couldn’t focus on trying to mass-produce the car. As Tesla would learn later, that takes a long time, focus, and a quality manufacturing process.
While we think of Elon Musk as synonymous with Tesla Motors, it didn’t start that way. Tesla Motors was started in 2003 by Eberhard, who would serve as Tesla’s first chief executive officer (CEO), and Tarpenning, who would become the first chief financial officer (CFO), when AC Propulsion declined to take the tZero to market. Funding for the company was obtained from Elon Musk and others, but they weren’t that involved at first, beyond instigation and support. It was a small shop with a mission: to develop an electric car that could be mass-produced.
The good folks at AC Propulsion gave Eberhard and Tarpenning test drives in the tZero, and even agreed to license their EV Power System and reductive charging patents. And so Tesla would develop a motor and work on their own power train so as not to rely on the patents from AC Propulsion over time. But the opening Eberhard saw was in those batteries. The idea was to power a car with battery packs made of lithium ion cells, similar to those used in laptops and of course the Rocket eBooks that NuvoMedia had made before they sold the company. They would need funding though. So Gage was kind enough to put them in touch with a guy who’d just made a boatload of money and had also recommended commercializing the car - Elon Musk.
This guy Musk, he’d started a space company in 2002. Not many people do that. And they’d been trying to buy ICBMs in Russia and recruiting rocket scientists. Wild. But hey, everyone used PayPal, where he’d made his money. So cool. Especially since Eberhard and Tarpenning had their own successful exit.
Musk signed on to provide $6.5 million in the Tesla Series A and they brought in another $1m to bring it to $7.5 million. Musk became the chairman of the board and they expanded to include Ian Wright during the fundraising and J.B. Straubel in 2004. Those five are considered the founding team of Tesla.
They got to work building up a team to build a high-end electric sports car. Why? Because that’s one part of the Secret Tesla Motors Master Plan. That’s the title of a blog post Musk wrote in 2006. You see, they were going to build a high-end hundred thousand dollar plus car. But the goal was to develop mass market electric vehicles that anyone could afford. They unveiled the prototype in 2006, selling out the first hundred in three weeks.
Meanwhile, Elon Musk’s cousins, Peter and Lyndon Rive, started a company called SolarCity in 2006, which Musk also funded. It merged with Tesla in 2016 to provide solar roofs and other solar options for Tesla cars and charging stations. SolarCity, as with Tesla, was able to capitalize on government subsidies, growing to become the third-largest provider of home solar installations with a little over 6 percent of the market share.
But we’re still in 2006. You see, they won a bunch of awards and got a lot of attention - now it was time to switch to general production. They worked with Lotus, a maker of beautiful cars that make up for issues with production quality in status, beauty, and luxury. They started with the Lotus Elise, increased the wheelbase, and bolstered the chassis so it could hold the weight of the batteries. And they used a carbon fiber composite for the body to bring the weight back down.
The process was slower than it seems anyone thought it would be. Everyone was working long hours, and they were burning through cash. By 2007, Eberhard stepped down as CEO. Michael Marks came in to run the company, and later that year Ze’ev Drori was made CEO - many credit him with tightening things up so they could get to the point of shipping the Roadster. Tarpenning left in 2008. As did others. But the brain drain didn’t seem all that bad, as they were able to ship their first car in 2008, after ten engineering prototypes.
The Roadster finally shipped in 2008, with the first car going to Musk. It could go 245 miles on a charge. Zero to 60 in less than 4 seconds. A sleek design language. But it was over $100,000. They were an inspiration, and there was a buzz everywhere. The showmanship of Musk, paired with the beautiful cars and the elites that bought them, drew a lot of attention. As did the $1 million in profit they earned in July of 2009, off 109 cars shipped.
But again, burning through cash. They sold 10% of the company to Daimler AG and took a $465 million loan from the US Department of Energy. They were now almost too big to fail.
They hit 1,000 cars sold in early 2010. They opened up orders in Canada. They were growing. But they were still burning through cash. It was time to raise some serious capital. So Elon Musk took over as CEO, cut a quarter of the staff, and Tesla filed for an IPO in 2010, raising over $200 million. But there was something special in that S-1 (as there often is when a company opens the books to go public): they would cease production of the Roadster, making way for the next big product.
Tesla cancelled the Roadster in 2012. By then they’d sold just shy of 2,500 Roadsters and had been thinking through and developing the next thing, which they’d shown a prototype of in 2011. The Model S started at $76,000 and went into production in 2012. It could go 300 miles, was a beautiful car, and came with a flashy tablet-inspired 17-inch display on the inside to replace buttons. It was like driving an iPad. Every time I’ve seen another GPS since using the one in a Model S, I feel like I’ve gotten in a time machine and gone back a decade.
But it had been announced in 2007 to ship in 2009. And then the ship date slipped to 2011, then 2012. Let’s call that optimism and scope creep. But Tesla has always eventually gotten there. Even if the price goes up. Such is the lifecycle of all technology: more features, more cost. There are multiple embedded Ubuntu operating systems controlling various parts of the car, connected on a network in the car. It’s a modern marvel, and Tesla was rewarded with tons of awards and, well, sales.
Charging a car that runs on batteries is a thing. So Tesla released the Superchargers in 2012, shipping 7 that year and growing slowly until now shipping over 2,500 per quarter. Musk took some hits because it took longer than anticipated to ship them, then to increase production, then to add solar. But at this point, many are solar and I keep seeing panels popping up above the cars to provide shade and offset other forms of powering the chargers. The more ubiquitous chargers become, the more accepting people will be of the cars.
Tesla needed to produce products faster. The Nevada Gigafactory was begun in 2013 to mass-produce battery packs and components. Here’s one of the many reasons for the high-flying valuation Tesla enjoys: it would take dozens if not a hundred factories like this to transition to sustainable energy sources. It started with a co-investment between Tesla and Panasonic, the two dumping billions into building a truly modern factory that’s now pumping out close to the goal set back in 2014. As demand increased, more Gigafactories started to crop up, with Gigafactory 5 being built to supposedly go into production in 2021 to build the Semi, the Cybertruck (which should also begin production in 2021), and the Model Y. Musk first mentioned the truck in 2012 and projected a 2018 or 2019 start for production. Close enough.
Another aspect of all that software is that the cars can get updates over the air. Tesla released Autopilot in 2014. Similar to other attempts to push slowly toward self-driving cars, Autopilot requires the driver to stay alert, but it can take on a lot of the driving - staying within the lines on the freeway, parking itself, traffic-aware cruise control, and navigation. But it’s still the early days for self-driving cars, and while we may think that because the number of transistors on integrated circuits keeps doubling it paves the way to pretty much anything, no machine learning project I’ve ever seen has gone as fast as we want, because it takes years to build the appropriate algorithms and then rethink industries based on their impact. But Tesla, Google through Waymo, and many others have been working on it for a long time (hundreds of years in startup-land) and it continues to evolve.
By 2015, Tesla had sold over 100,000 cars in the life of the company. They released the Model X that year. This was their first chance to harness the power of the platform - which in the auto industry means multiple cars of similar size and build sharing parts. Franz von Holzhausen designed it, and it is a beautiful car, with falcon-wing doors, up to a 370-mile range on the battery, and again with Autopilot. But harnessing the power of the platform was a challenge. You see, with a platform of cars you want most of the parts to be shared - the differences are often mostly cosmetic. The Model X shared a little less than a third of its parts with the Model S.
But it’s yet another technological marvel, with All Wheel Drive as an option, that beautiful screen, and check this out - a towing capacity of 5,000 pounds - for an electric automobile!
By the end of 2016, they’d sold over 25,000. To a larger automaker that might seem like nothing, but they’d sell over 10,000 in every quarter after that. And it would also become the platform for a mini-bus. Because why not. So they’d gone lateral in the secret plan but it was time to get back at it. This is where the Model 3 comes in.
The Model 3 was released in 2017 and is now the best-selling electric car in the history of the electric car. It was first shown off in 2016, and within a week Tesla had taken over 300,000 reservations. Everyone I talked to seemed to want in on an electric car that came in at $35,000. This was the secret plan. That $35,000 model wouldn’t be available until 2019, but they started cranking them out. Production was a challenge, with Musk famously claiming Tesla was in “Production Hell” and sleeping on an air mattress at the factory to oversee the many bottlenecks that came. Musk thought they could introduce more robotics than they could, and so they slowly increased production to first a few hundred per week, then a few thousand, until finally almost hitting the half-a-million mark in 2020.
This required buying Grohmann Engineering in 2017, now called Tesla Advanced Automation Germany, and pumping billions into production. Tesla added the Model Y in 2020, launching a crossover on the Model 3 platform and producing over 450,000 of them. And then of course they announced the Tesla Semi, selling for between $150,000 and $200,000. And what’s better than a Supercharger to charge those things? A Megacharger. As is often the case with ambitious projects at Tesla, it didn’t ship in 2020 as projected but is now supposed to ship, um, later.
Tesla also changed their name from Tesla Motors to Tesla, Inc. And if you check out their website today, solar roofs and solar panels share the top bar with the Models S, 3, X, and Y. SolarCity and batteries, right?
Big money brings big attention. Some good. Some bad. Some warranted. Some not. Musk’s online and sometimes nerd-rockstar persona was one of the most valuable assets at Tesla - at least in the fundraising, stock pumping popularity contest that is the startup world. But on August 7, 2018, he tweeted “Am considering taking Tesla private at $420. Funding secured.” The SEC would sue him for that, causing him to step down as chairman for a time and limit his Twitter account. But hey, the stock jumped up for a bit.
But Tesla kept keeping on, slowly improving things and finally hit about the half million cars per year mark in 2020. Producing cars has been about quality for a long time. And it needs to be with people zipping around as fast as we drive - especially on modern freeways. Small batches of cars are fairly straight-forward. Although I could never build one.
The electric car is good for the environment, but the cost to offset carbon for Tesla is still far greater than, I don’t know, making a home more energy efficient. But the improvements in the technology continue to come rapidly with all this money and focus being put on them. And the innovative designs that Tesla has deployed have inspired others - which often coincides with the rethinking of entire industries.
But there are tons of other reasons to want electric cars. The average automobile manufactured these days has about 30,000 parts. Teslas have less than a third of that. One hopes that will some day be seen in faster and higher quality production.
They managed to go from producing just over 18,000 cars in 2015 to over 26,000 in 2016, over 50,000 in 2017, and the 190,000s in 2018 and 2019, to a whopping 293,000 in 2020. But they sold nearly 500,000 cars in 2020 and seem to be growing at a fantastic clip. Here’s the thing, though. Ford exceeded half a million cars in 1916. It took Henry Ford from 1901 to 1911 to get to producing 34,000 cars a year, but only 5 more years to hit half a million. I read a lot of good and a lot of bad things about Tesla. Ford currently has a little over a $46.5 billion market cap. Tesla’s crested at nearly $850 billion and has since dropped to just shy of $600 billion.
Around 64 million cars are sold each year. Volkswagen is the top, followed by Toyota. Combined, they are worth less than Tesla on paper despite selling over 20 times the number of cars. If Tesla was moving faster, that might make more sense. But here’s the thing. Tesla is about to get besieged by competitors at every side. Nearly every category of car has an electric alternative with Audi, BMW, Volvo, and Mercedes releasing cars at the higher ends and on multiple platforms. Other manufacturers are releasing cars to compete with the upper and lower tiers of each model Tesla has made available. And miniature cars, scooters, bikes, air taxis, and other modes of transportation are causing us to rethink the car. And multi-tenancy of automobiles using ride sharing apps and the potential that self driving cars can have on that are causing us to rethink automobile ownership.
All of this will lead some to rethink that valuation Tesla enjoyed. But watching the moves Tesla makes, and scratching my head over some of them, makes me think to never under- or over-estimate Tesla or Musk. I don’t want anything to do with Tesla stock. Far too weird for me to grok. But I do wish them the best. I highly doubt the state of electric vehicles, and the coming generational shifts in transportation in general, would be where they are today if Tesla hadn’t done all the good and bad that they’ve done. They deserve a place in the history books when we start looking back at the massive shifts to come. In the meantime, I’ll just call this episode part 1 and wait to see if Tesla matches Ford production levels some day, crashes and burns, gets acquired by another company, or, who knows, packs up and heads to Mars.
Is it too late for Apple’s lightweight laptops? Steven Levy’s summary of the awkward PowerBook Duo situation.
Original text from Macworld Magazine, December 1993.
We can look around at distributed banking, crypto-currencies, Special Purpose Acquisition Companies, and so many other innovative business strategies as new and exciting and innovative. And they are. But paving the way for them was simplifying online payments to what I’ve heard Elon Musk call just some rows in a database.
Peter Thiel, Max Levchin, and former Netscaper Luke Nosek had this idea in 1998. Levchin and Nosek had worked together on a startup called SponsorNet New Media while at the University of Illinois Champaign-Urbana, the school PLATO and Mosaic had come out of. SponsorNet was supposed to sell online banner ads but would instead be one of four failed startups before they zeroed in on this new thing: enabling digital payments for businesses and making it simple for consumers to buy things online. They called the company Confinity and set up shop in beautiful Mountain View, California.
It was an era when a number of the organizations taking payments online weren’t doing it so well. Companies would cache credit card numbers on sites, many had weak security, and the rush to sell everything in the bubble forming around dot-coms fueled a bias for speed over security, privacy, or even reliability.
Confinity would store the private information in its own banking vaults, keep it secure, and provide access to vendors - taking a small per-transaction charge. Where large companies had been able to build their own systems to take online payments, now small businesses and emerging online stores could compete with the big boys. Thiel and Levchin had hit on something when they launched a service called PayPal to provide a digital wallet and enable online transactions. They even accepted venture funding, taking $3 million from backers like Deutsche Bank - beamed over Palm Pilots. One of those funders was Nokia, investing in PayPal’s expansion into digital services for the growing mobile commerce market. And by 2000 they were up to 1,000,000 users.
They saw an opening to let people make a purchase from a browser or app on one of those new smart phone ideas. And they were rewarded with over 10 million people using the site in just three short years, processing a whopping $3 billion in transactions.
Now this was the heart of the dot-com bubble. In that time, Elon Musk managed to sell his early startup Zip2, which made city guides on the early internet, to Compaq for around $300 million, pocketing $22 million for himself. He parlayed that payday into X.com, another online payment company. X.com exploded to over 200,000 customers quickly and as happens frequently with rapid acceleration, a young Musk found himself with a new boss - Bill Harris, the former CEO of Intuit.
And they helped invent many of the ways we do business online to this day. One of my favorites among Levchin’s contributions to computing, the Gausebeck-Levchin test, is one of the earliest implementations of what we now call CAPTCHA - you know, when you’re shown a series of letters and asked to type them in to eliminate bots.
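As a rough sketch of the idea - not the Gausebeck-Levchin implementation itself, which rendered the characters as distorted images to defeat OCR - the server-side flow looks something like this:

```python
# A toy model of a text CAPTCHA's server-side bookkeeping. Hypothetical
# names throughout; a real implementation rendered the answer as a
# warped image that humans could read but OCR of the day could not.
import secrets
import string

CHALLENGES = {}  # challenge token -> expected answer

def new_challenge():
    token = secrets.token_hex(8)
    answer = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    CHALLENGES[token] = answer
    # The answer would be rendered into a distorted image sent with the form.
    return token, answer

def verify(token, typed):
    expected = CHALLENGES.pop(token, None)  # single-use: one try per challenge
    return expected is not None and typed.strip().upper() == expected

token, answer = new_challenge()
print(verify(token, answer.lower()))  # True: tolerant of human case entry
```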
Harris helped the investors de-risk by merging X.com with Confinity. Peter Thiel and Elon Musk are larger-than-life minds in Silicon Valley, and the two were substantially different. Musk took on the CEO role, but Musk and Thiel were at odds. Thiel believed in a Linux ecosystem and Musk believed in a Windows ecosystem. Thiel wanted to focus on money transfers, similar to the PayPal of today. Given that those were just rows in a database, it was natural that that kind of business would become a red ocean, and indeed today there are dozens of organizations focused on it - but PayPal remains the largest. Musk wanted to become a full online banking system - much more ambitious. Ultimately Thiel won and assumed the title of CEO.
They remained a money transmitter rather than a full bank. This means they keep funds that have been sent and not picked up in an interest-bearing account at a bank.
They renamed the company to PayPal in 2001 and focused on taking it public, with an IPO as PYPL in 2002. The stock shot up 50% on the first day of trading, closing at $20 per share. Yet another example of the survivors of the dot-com bubble increasing the magnitude of valuations. By then, most eBay transactions accepted PayPal, and seeing an opportunity, eBay acquired PayPal for $1.5 billion later in 2002. Suddenly PayPal was the default option for closed auctions, and they would continue their meteoric rise. Musk is widely reported to have made almost $200 million when eBay bought PayPal, and Thiel is reported to have made over $50 million.
Under eBay, PayPal would grow and, as with most companies that IPO, see a red ocean form in their space. But they brought in people like Ken Howery, who served as the VP of corporate development, would later cofound the investment firm Founders Fund with Thiel, and then became the US Ambassador to Sweden under Trump. He’s the first of what’s called the PayPal Mafia, a couple dozen extremely influential personalities in tech.
By 2003, PayPal had become the largest payment processor for gambling websites. Yet they walked away from that business to avoid the complicated regulations, at least until various countries could verify licenses for online gambling venues.
In 2006 they added security keys and moved to sending codes to phones as a second factor of validation. In 2008 they bought Fraud Sciences, to gain access to better online risk management tools, and Bill Me Later.
As the company grew, they set up a company in the UK and began doing business internationally. They moved their EU presence to Luxembourg in 2007. They’ve often found themselves embroiled in politics: blocking political financing accounts, Alex Jones’ InfoWars, and - one of the more challenging cases for them - WikiLeaks in 2010. That led to members of Anonymous attacking PayPal with a series of denial-of-service attacks that brought the site down.
OK, so that early CAPTCHA was just one way PayPal was keeping us secure. It turns out that moving money is complicated - even the $3 you paid for that special Golden Girls t-shirt you bought for a steal on eBay. For example, US states require reporting certain transactions, some countries require actual government approval to move money internationally, and some require a data center in the country, like Turkey. So on a case-by-case basis, PayPal has had to decide whether it’s worth it to increase the complexity of the code and spend precious development cycles to support a given country. In some cases they can step in and, for example, connect the Baidu wallet to PayPal merchants in support of connecting China to PayPal.
They were spun back out of eBay in 2014, acquired Xoom for $1 billion in 2015, and bought iZettle, who also does point-of-sale systems, for $2.2 billion. And surprisingly they bought online coupon aggregator Honey for $4B in 2019. But their best acquisition, to many, would be tiny app payment processor Venmo for $26 million. I say this because a friend claimed they prefer it to PayPal because they like the “little guy.”
Out of nowhere, just a little more than 20 years ago, the founders of PayPal and a number of their initial employees willed a now Fortune 500 company into existence. While they were growing, they had to learn about and understand so many capital markets and regulations. That sometimes showed them how they could better invest money. And many of those early employees went on to have substantial impacts in technology. That brain drain helped fuel the Web 2.0 companies that rose next.
One of the most substantial ways was with investment activities. Thiel would go on to put $10 million of his own money into Clarium Capital Management, a hedge fund, and to fund Palantir, a big data AI company with a focus on the intelligence industry that now has a $45 billion market cap. And he funded another organization that doesn’t at all use our big private data for anything, called Facebook. He put half a million into Facebook as an angel investor - an investment that has paid back billions. He also launched Founders Fund and Valar Ventures, and is a partner at Y Combinator, in capacities where he’s funded everyone from LinkedIn and Airbnb to Stripe to Yelp to Spotify to SpaceX to Asana, and the list goes on and on and on.
Musk has helped take so many industries online. Why not just apply that startup modality to space? So he launched SpaceX. To cars? So he helped launch (and backed financially) Tesla. To solar power? So he launched SolarCity. To building tunnels? So he launched The Boring Company. He dabbles in Hyperloops (thus the need for tunnels) and OpenAI and, well, whatever he wants. He’s even done cameos in movies like Iron Man. He’s certainly a personality.
Max Levchin would remain the CTO and then co-found and become the CEO of Affirm, a public fintech company.
David Sacks was the COO at PayPal and founded Yammer. Roelof Botha is the former CFO at PayPal who became a partner at Sequoia Capital, one of the top venture capital firms. Yishan Wong was an engineering manager at PayPal who became the CEO of Reddit.
Steve Chen left to join Facebook but hooked back up with Jawed Karim, with whom he’d studied computer science at the University of Illinois at Champaign-Urbana, for a new project. They were joined by Chad Hurley, who had created the original PayPal logo, to found YouTube. They sold it to Google for $1.65 billion in 2006. Hurley now owns part of the Golden State Warriors, the MLS Los Angeles team, and Leeds United.
Reid Hoffman was another COO at PayPal, who Thiel termed the “firefighter-in-chief”, and he left to found LinkedIn. After selling LinkedIn to Microsoft for over $26 billion, he became a partner at the venture capital firm Greylock Partners.
Jeremy Stoppelman and Russel Simmons co-founded Yelp with $1 million in funding from Max Levchin, taking the company public in 2011. And the list goes on.
PayPal paved the way for small transactions on the Internet. A playbook repeated in different parts of the sector by the likes of Square, Stripe, Dwolla, Due, and many others - including Apple Pay, Amazon Payments, and Google Wallet. We live in an era now, where practically every industry has been taken online. Heck, even cars. In the next episode we’ll look at just that, exploring the next steps in Elon Musk’s career after leaving PayPal.
Heidi Brumbaugh, Antic and START Magazines
Heidi Brumbaugh worked at Antic Publishing, where she started off as editorial clerk, then was promoted to editorial assistant, for both Antic magazine and START magazine, then was programs editor for START Magazine. She wrote many articles for Antic and START, including three programs for the 8-bits published in Antic: Red, White and Blue, a board game; Hot and Cold, a Master Mind-type game; and Antic Prompter, a teleprompter application.
She met her husband, START author and programmer Jim Kent, through Antic Publishing; he also created the Cyber Paint program for the Atari ST.
This interview took place on February 28, 2021.
List of Antic articles by Heidi Brumbaugh
List of START articles by Heidi Brumbaugh
Heidi's programs at Atarimania
Heidi's review of Linkword Languages
Cyber Paint by Jim Kent
2013 Interview with Jim Capparell, Founder of Antic Magazine
PLATO (Programmed Logic for Automatic Teaching Operations) was an educational computer system that began at the University of Illinois at Urbana-Champaign in 1960 and ran into the 2010s in various flavors.
Wait, that's an oversimplification. PLATO seemed to develop on an island in the corn fields of Champaign, Illinois, and sometimes precedes, sometimes symbolizes, and sometimes fast-follows what was happening in computing around the world in those decades.
To put this in perspective - PLATO began on ILLIAC in 1960, a large classic vacuum tube mainframe. Short for the Illinois Automatic Computer, ILLIAC was built in 1952, around seven years after ENIAC was first put into production. As with many early mainframe projects, PLATO I began in response to a military need: we were looking for new ways to educate the masses of veterans using the GI Bill, and had to stretch the reach of college campuses beyond their existing infrastructures.
Computerized testing started with mechanical computing, got automated with the introduction of the IBM 805 Test Scoring Machine in 1935, and a number of researchers were looking to improve the consistency of education and bring in new technology to help with quality teaching at scale. The post-World War II boom did this for industry as well. Problem is, following the launch of Sputnik by the USSR in 1957, many felt the US was lagging behind in education. So grant money to explore solutions flowed, and the Illinois lab that would become CERL was able to capitalize on grants from the US Army, Navy, and Air Force. By 1959, physicists at Illinois began thinking of using that big ILLIAC machine they had access to. Daniel Alpert recruited Don Bitzer to run the project, after false starts with educators around the campus.
Bitzer shipped the first instance of PLATO I in 1960. They used a television to show images, stored those images in Raytheon tubes, and built a makeshift keyboard designed for PLATO so users could provide input and navigate interactive menus. They experimented with slide projectors when they realized the tubes weren't all that reliable, and figured out how to do rudimentary time sharing, expanding to a second concurrent terminal with the release of PLATO II in 1961.
Bitzer was a classic Midwestern tinkerer. He solicited help from local clubs, faculty, and high school students, and wherever he could cut a corner he was happy to move money and resources to other important parts of the system so they could build more cool stuff. This was the age of hackers and they hacked away. He inspired, but also allowed people to follow their own passions. Innovation must be decentralized to succeed.
They created an organization to support PLATO in 1966, as part of the Graduate College: the Computer-Based Education Research Laboratory, or CERL. Based on early successes, CERL got more and more funding. Now that they were beyond a 1:1 ratio of users to computers and officially into time sharing, it was time for PLATO III.
There were a number of enhancements in PLATO III. For starters, the system was moved to a CDC 1604 that CEO of Control Data William Norris donated to the cause - and expanded to allow for 20 terminals. But it was complicated to create new content and the team realized that content would be what drove adoption. This was true with applications during the personal computer revolution and then apps in the era of the App Store as well. One of many lessons learned first on PLATO.
Content was in the form of applications that they referred to as lessons. It was a teaching environment, after all. They emulated the ILLIAC for existing content but needed more. People were compiling applications in a complicated language. Professors had day jobs and needed a simpler way to build content. So Paul Tenczar on the team came up with a language specifically tailored to creating lessons. Similar in some ways to BASIC, it was called TUTOR.
Tenczar released the manual for TUTOR in 1969, and with an easier way of getting content out there was an explosion in new lessons; new features and ideas flourished. We would see simulations, games, and courseware that would lead to a revolution in ideas. In a revolutionary time.
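To get a feel for why TUTOR lowered the bar for professors, here's a loose Python sketch of its core present-then-judge pattern. Real TUTOR paired commands like unit, write, arrow, and answer; the question wording and function name below are invented for illustration, not a transcription of an actual lesson.

# A loose Python sketch of TUTOR's present-then-judge lesson pattern.
# Real TUTOR used commands like "unit", "write", "arrow", and "answer";
# this content is illustrative, not actual TUTOR code.
def lesson_unit():
    print("A triangle has a base of 4 and a height of 3.")  # "write"
    print("What is its area?")
    while True:
        response = input("> ").strip()   # the "arrow" prompt
        if response == "6":              # the "answer" judging
            print("ok")                  # right answers got "ok"
            return
        print("no")                      # wrong ones looped back

lesson_unit()

The point isn't the Python - it's that an author only had to describe the question and the acceptable answers, and the system handled the judging loop.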
The number of hours logged by students and course authors steadily increased. The team became ever more ambitious. And they met that ambition with lots of impressive achievements.
Now that they were comfortable with the CDC 1604, they knew that the new content needed more firepower. CERL negotiated a contract with Control Data Corporation (CDC) in 1970 to provide equipment and financial support for PLATO. Here they ended up with a CDC Cyber 6400 mainframe, which became the foundation of the next iteration of PLATO, PLATO IV.
PLATO IV was a huge leap forward on many levels. They had TUTOR but with more resources could produce even more interactive content and capabilities. The terminals were expensive and not so scalable. So in preparation for potentially thousands of terminals in PLATO IV they decided to develop their own.
This might seem a bit space age for the early 1970s, but what they developed was a touch-capable flat panel plasma display. It was 512x512 and rendered 60 lines per second at 1260 baud. The plasma panel had memory built in - a lit pixel stayed lit without a refresh - which was made possible by the fact that they weren't converting digital signals to analog, as is done on CRTs. Instead, it was a fully digital experience. The flat panel used infrared to see where a user was touching, giving users some of their first exposure to touch screens. The touch grid was 16 by 16 rather than 512 by 512, but that was more than enough to take them through the next decade.
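To make that resolution gap concrete: dividing a 512x512 panel into a 16x16 touch grid means each touch cell covers a 32x32 block of pixels. A quick back-of-the-envelope sketch in Python (the function names are mine, just for illustration):

# The 512x512 plasma panel divided into a 16x16 infrared touch grid:
# each touch cell covers a 32x32 block of pixels (512 / 16 = 32).
PANEL = 512
GRID = 16
CELL = PANEL // GRID   # 32 pixels per touch cell

def cell_for_pixel(x, y):
    # Which touch cell registers a press at pixel (x, y)?
    return (x // CELL, y // CELL)

def pixel_bounds(col, row):
    # The pixel rectangle covered by touch cell (col, row).
    return (col * CELL, row * CELL,
            (col + 1) * CELL - 1, (row + 1) * CELL - 1)

print(cell_for_pixel(300, 100))   # -> (9, 3)
print(pixel_bounds(9, 3))         # -> (288, 96, 319, 127)

Coarse by modern standards, but plenty for a lesson to draw a handful of big on-screen buttons and know which one a student pressed.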
The system could render basic bitmaps, but some lessons needed richer content - what we might today call multimedia. The Raytheon tubes used in previous systems were really more of a CRT technology and had plenty of drawbacks. So newer machines also included a microfiche projector that put images onto the back of the screen.
The terminals were a leap forward. There were other programs going on at about the same time during the innovative bursts of PLATO, like the Dartmouth Time Sharing System, or DTSS, project that gave us BASIC instead of TUTOR. Some systems also had rudimentary forms of forums, such as EIES, and the BBS and Usenet cultures would emerge later in the decade. But PLATO represented a unique look into the splintered networks of the time sharing age.
Combined with the innovative lessons and newfound collaborative capabilities the PLATO team was about to bring about something special. Or lots of somethings that culminated in more. One of those was Notes.
PLATO Notes was created by David R. Woolley in 1973. Tenczar had asked the 17-year-old Woolley to write a tool that would allow users to report bugs with the system. Before that, there was just a notes file that people could edit or even delete. So Woolley made notes permanent, with each entry automatically tagged with its author and date. He expanded it to allow for 63 responses per note, and when opened, it showed the most recent notes. People came up with other features, and so it became menu-driven, providing access to System Announcements, Help Notes, and General Notes.
But the notes were just the start. In 1973, seeing the need for even more ways to communicate with other people using the system, Doug Brown wrote a prototype for Talkomatic - a chat program that showed what people were typing as they typed it. Woolley helped Brown, and they added channels with up to five people per channel; others could watch the chat as well. The idea would be expanded and officially supported as a tool called Term-Talk, entered by using the TERM key on a console, which allowed for a conversation between two people. You could TERM, or chat, a person, and then they could respond or mark themselves as busy.
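The character-at-a-time broadcast was the novel part - no send button, every keystroke went straight to everyone watching. A toy Python sketch of the idea (the class and method names are invented for illustration, not anything from the PLATO source):

# Toy sketch: a Talkomatic-style channel fans every keystroke out to
# up to five active participants, plus any silent watchers.
class User:
    def __init__(self, name):
        self.name = name

    def show(self, sender_name, ch):
        print(f"[{sender_name}] typed {ch!r}")

class Channel:
    MAX_PARTICIPANTS = 5

    def __init__(self):
        self.participants = []   # active typists, capped at five
        self.watchers = []       # everyone else can watch silently

    def join(self, user):
        if len(self.participants) < self.MAX_PARTICIPANTS:
            self.participants.append(user)
        else:
            self.watchers.append(user)

    def keystroke(self, sender, ch):
        # each character is broadcast the moment it's typed
        for user in self.participants + self.watchers:
            if user is not sender:
                user.show(sender.name, ch)

channel = Channel()
alice, bob = User("alice"), User("bob")
channel.join(alice)
channel.join(bob)
for ch in "hi":
    channel.keystroke(alice, ch)   # bob sees 'h', then 'i'

Today's "so-and-so is typing…" indicators are a fainter echo of the same design choice.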
Because the people writing this stuff were also the ones supporting users, they added another feature, the ability to monitor another user, or view their screen. And so programmers, or consultants, could respond to help requests and help get even more lessons going. And some at PLATO were using ARPANET, so it was only a matter of time before word of Ray Tomlinson’s work on electronic mail leaked over, leading to the 1974 addition of personal notes, a way to send private mail engineered by Kim Mast.
As PLATO grew, the amount of content exploded. They added categories to Notes in 1975 which led to Group Notes in 1976, and comments and linked notes and the ability to control access.
But one of the most important innovations PLATO will be remembered for is games. Anyone who has played an educational game will note that school lessons and games aren't always all that different. Since Rick Blomme had ported Spacewar! to PLATO in 1969 and added a two-player option, multi-player games had been on the rise. They made leaderboards for games like Dogfight so players could get early forms of game rankings. Games like Airfight and Airace and Galactic Attack would follow those.
MUDs were another form of games that came to PLATO. Colossal Cave Adventure had come in 1975 for the PDP-10, so again these things were happening in parallel - where there were direct influences and where innovations emerged independently in isolation is hard to say. But the dungeon crawlers exploded on PLATO. We got Moria, Oubliette by Jim Schwaiger, pedit5, crypt, dungeon, avatar, and drygulch. We saw the rise of intense storytelling and different game mechanics, mostly inspired by Dungeons and Dragons. As PLATO terminals found their way into high schools and other universities, the number of games and the amount of time spent on them exploded, with estimates of 20% of time on PLATO being spent playing games.
PLATO IV would grow to support thousands of terminals around the world in the 1970s. It was a utility. Schools (and even some parents) leased lines back to Champaign-Urbana, and many in computing thought that these time sharing systems would become the basis for a utility model in computing, similar to the cloud model we have today. But computing had to pass through the era of the microcomputer before boomeranging back to that idea.
That microcomputer revolution would catch many off guard - those who didn't see how Moore's Law, the growing number of factories, and standardization would lead to microcomputers. Control Data had bet big on the mainframe market - and PLATO. CDC would sell mainframes to other schools to host their own PLATO instances. This is where it went from a time sharing system to a network of computers that each did time sharing. Like a star topology.
Control Data looked to PLATO as one form of what the future of the company would be. Norris saw this mainframe with thousands of connections as a way to lease time on computers. CDC took PLATO to market as CDC PLATO. Here, schools and companies alike could benefit from distance education. And for a while it seemed to be working. Financial companies and airlines bought systems, and commercialization was on the rise, with over a hundred PLATO systems in use as we made our way to the middle of the 1980s. Even government agencies like the Department of Defense used them for training. But this just happened to coincide with the advent of the microcomputer.
CDC made their own terminals, often built with the same components that would be found in microcomputers, but failed to capitalize on that market. Corporations didn't embrace the collaboration features and often had them turned off. Social computing would move to bulletin boards. And CDC would release versions of PLATO as Micro-PLATO for the TRS-80, the Texas Instruments TI-99, and even Atari computers. But the bureaucracy at CDC had slowed things down to the point that they couldn't capitalize on the rapidly evolving PC industry. And prices were too high at a time when home computers were just moving from a hobbyist market to the mainstream.
The University of Illinois spun PLATO out into its own organization called University Communications, Inc (or UCI for short) and closed CERL in 1994. That was the same year Marc Andreessen co-founded Mosaic Communications Corporation, makers of Netscape, successor to NCSA Mosaic. NCSA, The National Center for Supercomputing Applications, had also benefited from National Science Foundation grants when it was started in 1986. And all those students who flocked to the University of Illinois because of programs like PLATO had brought with them more expertise.
UCI continued PLATO as NovaNet, which was acquired by National Computer Systems and then Pearson, finally getting shut down in 2015 - 55 years after those original days on ILLIAC. It evolved from a vacuum tube-driven mainframe in a research institute with one terminal, to two terminals, to a transistorized mainframe with hundreds and then over a thousand terminals connected from research and educational institutions around the world. It represented new ideas in programming and programming languages and inspired generations of innovations.
So PLATO gave us multi-player games, new programming languages, instant messaging, online and multiple choice testing, collaboration forums, message boards, multi-person chat rooms, early rudimentary remote screen sharing, their own brand of plasma display (and all the research behind printing circuits on glass for that), and early research into touch sensitive displays. And as we've shown in just a few of the many people who contributed to computing afterward, they helped inspire an early generation of programmers and innovators.
If you like this episode I strongly suggest checking out The Friendly Orange Glow by Brian Dear. It's a lovely work with just the right mix of dry history and flourishes of prose. A short history like this can't hold a candle to a detailed anthology like Dear's book.
Another well researched telling of the story can be found in a couple of chapters of A People's History Of Computing In The United States by Joy Rankin. She does a great job drawing a parallel (and sometimes a direct line) from the Dartmouth Time Sharing System and other early networks. And yes, terminals dialing into a mainframe and using resources over telephone and leased lines was certainly a form of bridging infrastructures and seemed like a network at the time. But no mainframe could have scaled to become a utility in the sense that all of humanity could access what was hosted on it.
Instead, the ARPANET was put online and grew from 1969 to 1990, and working out the hard scientific and engineering principles behind networking protocols gave us TCP/IP. In her book, Rankin makes great points about the BASIC and TUTOR applications helping shape more of our modern world, in how they inspired the future of how we used personal devices once connected to a network. The scientists behind ARPANET, then NSFnet and the Internet, did the work to connect us. You see, those dial-up connections were expensive over long distances. By 1974 there were 47 computers connected to the ARPANET, and by 1983 we had TCP/IPv4. And much like Bitzer allowing games, they didn't seem to care too much how people would use the technology - they wanted to build the foundation, a playground for whatever people wanted to build on top of it.
So the administrative and programming team at CERL deserve a lot of credit. The people who wrote the system, the generations who built features and code only to see it become obsolete, came and went - but the compounding impact of their contributions can be felt across the technology landscape today. Some of that is people rediscovering work done at CERL, some is directly inspired, and some has been lost, only to probably be rediscovered in the future. One thing is for certain: their contributions to e-learning are unmatched by any other system out there. And their technical contributions - both those patented and those that were either unpatentable or that they didn't think to patent - are immense.
Bitzer and the first high schoolers and then graduate students across the world helped to shape the digital world we live in today - more from an almost sociological aspect than a technical one. And the deep thought applied to the system lives on in so many aspects of our modern world. Sometimes that's a straight line, and other times it's dotted or curved. Looking around, most universities have licensing offices now to capitalize on the research done. Check out a university near you and see what they have available for license. You might be surprised. As I'm sure many in Champaign were after all those years. Just because CDC couldn't capitalize on some great research doesn't mean we can't.
Atari at the Science Fair: Scott Ryder: Atari-Controlled Robot
Here's an article from The Fresno Bee (Fresno, California) dated April 15, 1982: "Science proves Fair game to young minds".
"Joseph Paul Ogas, 17, has designed a cheaper way to manipulate material beneath a microscope. Garey Nishimura, 13, has evaluated the relative flammability of several household fabrics. Theirs were the big winners among the 693 projects that filled the Fresno Convention Center Exhibit Hall for this year’s California Central Valley Science and Engineering Fair.
"There were other interesting projects that didn’t win big [such as]
'The Effects of Birth Control Pills on Plants,' and 'Determining the Correlation Between Canine Howling, Cockroach Activity and Earthquake Prediction'."
And later -- in the article's final paragraph, the reason for this interview: "Runners up [included] Scott Ryder, a sixth-grader at Ayer Elementary School: 'Can an Atari 800 Control a Robot With Software?'"
Can an Atari 800 control a robot with software? And if so, why did an awesome Atari-controlled robot only earn a runner-up award at the Science and Engineering Fair? I talked with Scott to find out.
This interview took place on February 21, 2021.
ANTIC Episode 75 - Video Wars
In this episode of ANTIC The Atari 8-bit Computer Podcast… we discuss the merits of Sophia vs. VBXE for video upgrades, kick off the BASIC 10-liners contest, discuss some new games, and talk about numerous hardware upgrades that are coming.
Interview index: here
What We’ve Been Up To
YouTube videos this month
New at Archive.org
We've covered Radio Shack, but there are a few other retail stores I'd like to cover as well: CompUSA, Circuit City, and Fry's, to name a few. Not only is there something to be learned from the move from brick and mortar electronics chains to e-commerce, but there's plenty to be learned about how to treat people, how people perceived computers, and what we need and when, as well.
You see, Fry's was one of the few places you could walk in, pick a CPU, find a compatible motherboard, pick a sweet chassis to put it in, get a power supply, a video card, some memory, back then probably a network card, maybe some sweet fans, a cooling system for the CPU you were about to overclock, an SSD to boot the machine, a hard drive to store stuff, a DVD drive, a floppy just in case, pick up some velcro wrap to keep the cables at bay, get a TV, a cheap knockoff smart watch, a VR headset that would never work, maybe a safe since you already have a cart, a soundbar 'cause you did just get a TV, some headphones for when you'll keep everyone else up with the soundbar, a couple of resistors for that other project, a fixed frequency video card for that one SGI in the basement, a couple smart plugs, a solar backpack, and a CCNA book that you realize is actually 2 versions out of date when you go to take the test. Yup, that was a great trip. And ya' there's also a big bag of chips and a 32 ounce of some weird soda gonna' go in the front seat with me. Sweet. Now let's just toss the cheap flashlight we just bought into the glove box in case we ever break down, and we're good to go home and figure out how to pay for all this junk on that new Fry's credit card we just opened.
But that was then and this is now. Fry's announced it was closing all of its stores on February 24th, 2021 - the week we're recording this episode. To quote the final statement on their website:
“After nearly 36 years in business as the one-stop-shop and online resource for high-tech professionals across nine states and 31 stores, Fry’s Electronics, Inc. (“Fry’s” or “Company”), has made the difficult decision to shut down its operations and close its business permanently as a result of changes in the retail industry and the challenges posed by the Covid-19 pandemic. The Company will implement the shut down through an orderly wind down process that it believes will be in the best interests of the Company, its creditors, and other stakeholders.
The Company ceased regular operations and began the wind-down process on February 24, 2021. It is hoped that undertaking the wind-down through this orderly process will reduce costs, avoid additional liabilities, minimize the impact on our customers, vendors, landlords and associates, and maximize the value of the Company’s assets for its creditors and other stakeholders.”
Wow. Just wow. I used to live a couple of miles from a Fry's and it was a major part of furthering my understanding of arcane, bizarre, sometimes emergent, and definitely dingy areas of computing. And if those adjectives don't seem to have been included lovingly, they most certainly are. You see, every trip to Fry's was strange.
Donald Fry founded Fry's Food and Drug in 1954. The store rose to prominence in the 50s and 60s until his brother Charles Fry sold it off in 1972. As a part of Kroger it still exists today, with 22,000 employees. But this isn't the story of a supermarket chain. I guess I did initially think the two chains were linked because the logos look somewhat similar - but the real link is the family, not the logo.
Instead, let’s cover what happened to the $14 million the family got from the sale of the chain. Charles Fry gave some to his sons John, Randy, and David. They added Kathryn Kolder and leased a location in Sunnyvale, California to open the first Fry’s Electronics store in 1985.
This was during the rise of the microcomputer. The computing industry had all these new players who were selling boards and printers and floppy drives. They put all this stuff in bins kinda’ like you would in a grocery store and became a one-stop shop for the hobbyist and the professional alike. Unlike groceries, the parts didn’t expire so they were able to still have things selling 5 or 10 years later, albeit a bit dusty.
1985 was the era when many bought integrated circuits, motherboards, and soldering irons and built their own computers. Builders saw the rise of the microprocessor through the 80286 and other x86 chips. And as we moved into an era of predominantly x86 clones of the IBM PC, the buses and cards became standard. Provided a power supply had a Molex connector, it could probably light up most motherboards and hard drives. IDE became the standard interface, then later SATA. Parts were pretty interchangeable.
Knowing groceries, they also sold those. Get some oranges and a microprocessor. They stopped selling the produce but always sold snacks until the day they closed down. And services were always a thing at Fry's: those who didn't want to spend hours putting spacers on a motherboard and plugging everything in could pay to have a machine built or repaired at the store.
They also sold other electronics. Sometimes the selection seemed totally random. I bought my first MP3 player at a Fry’s - the Diamond Rio. And funny LED lights for computer fans before that really became a thing. Screwdriver kits, thermal grease, RAM chips, unsoldered boards, weird little toys, train sets, coloring books, certification books for that MCSE test I took in 2002, and whatever else I could think of.
The stores were kitschy. Some had walls painted like circuit boards. Some had alien motifs. Others were decorated like the old west. It's like they adorned the joint with whatever weird stuff they could find. People were increasingly going online, so in 1997 they bought Frys.com. To help people get online, they started selling Internet access in 2000. But by then there were so many vendors helping people get online that it wasn't going to be successful. People were increasingly shopping online too, so they bought Cyberian Outpost in 2001 and moved it to outpost.com - which later just pointed to Frys.com.
The closing of a number of Radio Shack stores and Circuit City and CompUSA seemed to give them a shot in the arm for a bit. But you could buy computers at Gateway Country or through Dell. Building your own computer was becoming more and more a niche industry for gamers and others who needed specific builds.
They grew to 34 stores at their height. Northern California stores in Campbell, Concord, Fremont, Roseville, Sacramento, San Jose, and that original Sunnyvale (now across the street from the old original Sunnyvale), and Southern California stores in Burbank, City of Industry, Fountain Valley, Manhattan Beach, Oxnard, San Diego, San Marcos, and the little one in Woodland Hills - it seemed like everyone in California knew to go to Fry's when they needed some doodad. In fact, in the documentary about General Magic, the team talks about constantly going back and forth to Fry's for parts to build their device.
But they did expand out of California, with 8 stores in Texas, two in Arizona, one in Illinois, one in Indiana, one in Nevada, one in Oregon, and another in Washington. In some ways it looked as though they were about to have a chain that could rival the supermarket chain their dad helped build. But it wasn't meant to be.
With the fall of Radio Shack, CompUSA, and Circuit City, I was always surprised Fry's stayed around. Tandy had started a similar concept called Incredible Universe, but that didn't last too long. Still, I loved them. The customer service wasn't great. The stores were always a little dirty. But I never left empty-handed, even when I didn't find what I was looking for.
Generations of computer enthusiasts bought everything from scanners to printers at Fry's. They were sued over how they advertised, for sexual harassment, during divorce settlements, and over how they labeled equipment. They lost money to embezzlement, and as people increasingly turned to Amazon and other online vendors for the best price on that MSI motherboard or a screen for the iPhone, keeping such a massive inventory was putting them out of business. So in 2019, amidst rumors they were about to go under, they moved to stocking the stores via consignment. Not all upstream vendors could do that, leading to an increasingly strange selection and to shoppers finding what they needed less and less often.
Then came COVID. They closed a few stores, and between the last-ditch effort of consignment and bins that sat empty as hardware moved, they just couldn't do it any more. Like Circuit City and CompUSA before them - flashier stores with less selection but more complete systems - they finally closed their doors in 2021, after 36 years. And so we live in an era where many computers, tablets, and phones are no longer serviceable or have parts that can be swapped out. We live in an era where, when we can service a device with those parts, we often go online to source them. And we live in an era where, if we need instant gratification to replace components, there are plenty of retail chains like Target or Walmart that sell components and move far more volume than Fry's did, so they're more competitive on price. We live in an era where we don't need to go into a retailer for software and books, both once sold at high margins - there are stores on the Apple and Microsoft and Google platforms for that. And of course 2020 was a year that many retail chains had to close their doors in order to keep their employees safe, losing millions in revenue.
All of that eventually became too much for other computer stores as each trend slowly eroded the business. And now it's become too much for Fry's. I will always remember the countless hours I strolled around the dingy store, palming this adapter and that cable and trying to figure out what components might fit together so I could get the equivalent of an AlienWare computer for half the cost. And I'll even fondly remember the usually sub-par customer service, because it forced me to learn more. And I'll always be thankful that they had crap sitting around for a decade, because I always learned something new about the history of computers in their bins of arcane bits and bytes.
And their closing reminds us, as the closings of former competitors and even other stores like Borders do, that an incredible opportunity lies ahead of us. These shifts in society also shift the supply chain. Retailers used to get a 50% markup on software and a hefty markup on the books I wrote. Now I can publish software on the app stores and pay less of my royalties to the retailers. Now I don't need a box and manual for software. Now books don't have to be printed and can even be self-published in those venues if I see fit to do so. And while Microsoft, Apple, and Google's "Services" revenue, or revenue from Target, once belonged to stores like Fry's, the opportunities have moved to linking and aggregating and adding machine learning and looking to fields that haven't yet been brought into a more digital age - or even to harkening back to simpler times and providing a more small town, white glove approach to life. Just as the dot com crash created a field where companies like Netflix and Google could become early unicorns, so every other rise and fall creates new, uncharted green fields and blue oceans. Thank you for your contributions - both past and future.
The Intel 8086 may be the most important processor ever made. Its descendants are central to modern computing, while retaining an absurd level of backwards compatibility. For such an important chip it had an unexpected beginning. The 8086 was meant as a stopgap measure while Intel worked on bigger and better projects. This episode we are looking at how Intel was trying to modernize, how the 8086 fit into that larger plan, and its pre-IBM life.
James and John discuss eBay Finds: MacEffects Mac Portable 8MB card, computer lot, and PowerBook 550c. John talks about a SCSI hard drive replacement, and news includes Action Retro Libretto Hackintosh, System 7 Today return, Lisa X Fusion X 360 Project, Floppy Emu update, and recreating a Mac SE logic board.
Steve Jobs left Apple in 1985. He co-founded NeXT Computers and took Pixar public. He then returned to Apple as the interim CEO in 1997 at a salary of $1 per year. Some of the early accomplishments on his watch were started before he got there. But turning the company back around was squarely on him and his team.
By the end of 1997, Apple had moved to build-to-order manufacturing powered by an online store built on WebObjects, the NeXT application server. They killed off a number of models, simplifying the product lineup, and also killed the clone deals, ending licensing of the operating system to other vendors who were at times building sub-par products.
And they were busy. You could feel the frenetic pace. They were busy at work weaving the raw components from NeXT into an operating system that would be called Mac OS X. They announced a partnership that would see Microsoft invest $150 million into Apple to settle patent disputes; in exchange, Internet Explorer would be bundled on the Mac, and Microsoft committed to keep releasing Office for the Mac. By then, Apple had $1.2 billion in cash reserves again and a streamlined company ready to move forward - but 1998 was a bottoming out of sorts, with Apple doing just shy of $6 billion in revenue. To move forward, they took a little lesson from the past and released a new all-in-one computer. One that put the color back into that Apple logo. Or rather removed all the colors but Aqua blue from it.
The return of Steve Jobs invigorated many, such as Jony Ive, who is reported to have had a resignation letter in his back pocket when he met Jobs. Their collaboration led to a number of innovations at a furious pace, starting with the iMac. The first iMacs were shaped like gumdrops and the color of candy as well. The commercials for the original Bondi Blue model showed all the cords in a typical PC setup and then the new iMac, "as unPC as you can get." The iMac was pitched as the easy way to get on the Internet. But the ensuing upgrades allowed for far more than that.
The iMac put style back into Apple, and even into computers generally. Subsequent releases came in candy colors like Lime, Strawberry, Blueberry, Grape, Tangerine, and later on Blue Dalmatian and Flower Power. The G3 chips bled out into other, more professional products like the Blue and White G3 tower, which featured a slightly faster processor than the beige tower G3 but a much cooler look - and it was very easy to get into compared to any other machine on the market at the time. And the clamshell laptops used the same design language: playful, colorful, but mostly as fast as their traditional PowerBook counterparts.
But the team had their eye on a new strategy entirely. Yes, people wanted to get online - but these computers could do so much more. Apple wanted to make the Mac the digital hub for content. This centered around a technology that had been codeveloped by Apple, Sony, Panasonic, and others called IEEE 1394. But that was kinda' boring so we just called it FireWire.
Begun at Apple in 1986, FireWire had become a port found on most digital cameras of the day. USB wasn't fast enough to load and unload newer content like audio and video from cameras to computers. And I can clearly remember that by the year 1999 we were all living, as Jobs put it, in a "new emerging digital lifestyle." This led to a number of releases from Apple. One was iMovie, included for free with the new iMac DV model. That model dumped the fan (which Jobs never liked, even going back to the early days of Apple), added FireWire, and offered the ability to add an AirPort card. Oh, and they released an AirPort base station in 1999 to help people get online easily. It is still one of the simplest router and wi-fi devices I've ever used. And it was sleek, in the new Graphite design language that would carry Apple's professional devices for years.
iMovie was a single place to load all those digital videos and turn them into something else. And there was another format on the rise: MP3. Most everyone I've ever known at Apple loves music. It's in the DNA of the company, going back to Wozniak and Jobs and their love of musicians like Bob Dylan in the 1970s. The rise of the transistor radio and then the cassette and the Walkman had opened our eyes to the democratization of what we could listen to as humans. And the MP3 format, which had been around since 1993, was on the rise. People were ripping and trading songs, and Apple looked at a tool called Audion and another called SoundJam and decided that rather than Sherlocking them by building the features into the OS, they would buy SoundJam in 2000. The new software, which they called iTunes, allowed users to rip and burn CDs easily. Apple then added iPhoto, iWeb, and iDVD - for photos, web sites, and DVDs respectively. The digital hub was coming together.
But there was another very important part of that whole digital hub strategy. Now that we had music on our computers, we needed something more portable to listen to it on. There were MP3 players like the Diamond Rio out there - and there had been others going back to the waning days of the Digital Equipment research labs - but they were either clunky, poorly designed, or just crappy and cheap. And most only held an album or two. I remember walking down that aisle at Fry's about once every other month, waiting and hoping. But nothing good ever came.
That is, until Jobs and Apple hardware engineering lead Jon Rubinstein found Tony Fadell. He had been at General Magic - you know, the company that ushered in mobility as an industry. And he'd built Windows CE mobile devices for Philips: the Velo and the Nino. But when we got him working with Jobs, Rubinstein, and Jony Ive on the industrial design front, we got one of the most iconic devices ever made: the iPod.
And the iPod wasn't all that different on the inside from a Newton. Blasphemy, I know. It sported a pair of ARM chips, and Ive harkened back to simpler times by basing the design on a transistor radio. Apple's attention to detail - and the lack thereof in the Sony Discman - propelled Apple to sell more than 400 million iPods to this day. By the time the iPod was released in 2001, Apple revenues had jumped to just shy of $8 billion, then dropped back down to $5.3 billion. But everything was about to change. And part of that was that the iPod design language was about to leak out to the rest of the products, with white iBooks, white Mac Minis, and other white devices as a design language of sorts.
To sell all those iDevices, Apple embarked on a strategy that seemed crazy at the time. They opened retail stores. They hired Ron Johnson and opened two stores in 2001. They would grow to over 500 stores, and hit a billion in sales within three years. Johnson had been the VP of merchandising at Target and with the teams at Apple came up with the idea of taking payment without cash registers (after all you have an internet connected device you want to sell people) and the Genius Bar.
And generations of devices came that led people back into the stores. The G4 came along - as did faster RAM. And while Apple was updating the classic Mac operating system, they were also hard at work preparing the NeXT-derived OS to go across the full line of computers. They had been working the bugs out in Rhapsody and then Mac OS X Server, but the client OS, codenamed Kodiak, went into public beta in 2000 and was then released as Cheetah in 2001, with the option to dual-boot back into the classic Mac OS. And thus began a long line of big cats: Puma, then Jaguar in 2002, Panther in 2003, Tiger in 2005, Leopard in 2007, Snow Leopard in 2009, Lion in 2011, and Mountain Lion in 2012, before moving to the new naming scheme that uses famous places in California.
Mac OS X finally provided a ground-up, modern, object-oriented operating system. They built the Aqua interface on top of it. Beautiful, modern, sleek. Even the backgrounds! The iMac would go from a gumdrop to a sleek flat panel on a metal stand, like a sunflower. Jobs and Ive are both named on the patents for this as well as many of the other inventions that came along in support of the rapid device rollouts of the day.
Jaguar, or 10.2, would turn out to be a big update. They added Address Book, iChat - now called Messages, and after nearly two decades replaced the 8-bit Happy Mac with a grey Apple logo in 2002. Yet another sign they were no longer just a computer company. Some of these needed a server and storage so Apple released the Xserve in 2002 and the Xserve RAID in 2003. The pro devices also started to transition from the grey graphite look to brushed metal, which we still use today.
Many wanted to step beyond just listening to music. There were expensive tools for creating music, like Pro Tools. And don't get me wrong, you get what you pay for - it's awesome. But democratizing the creation of media meant Apple wanted a piece of software to create digital audio, and so released GarageBand in 2004. For this they again turned to an acquisition: Emagic, which had a tool called Logic Audio. I still use Logic to cut my podcasts. But with GarageBand they stripped it down to the essentials and released a tool that proved wildly popular, providing an on-ramp for many into the audio engineering space.
Not every project worked out. Apple had ups and downs in revenue and sales in the early part of the millennium. The G4 Cube was released in 2000 and while it is hailed as one of the greatest designs by industrial designers it was discontinued in 2001 due to low sales. But Steve Jobs had been hard at work on something new. Those iPods that were becoming the cash cow at Apple and changing the world, turning people into white earbud-clad zombies spinning those click wheels were about to get an easier way to put media into iTunes and so on the device.
The iTunes Store was released in 2003. Here, Jobs parlayed the success at Apple along with his own brand to twist the arms of executives from the big 5 record labels to finally allow digital music to be sold online. Each song was a dollar. Suddenly it was cheap enough that the music trading apps just couldn’t keep up. Today it seems like everyone just pays a streaming subscription but for a time, it gave a shot in the arm to music companies and gave us all this new-found expectation that we would always be able to have music that we wanted to hear on-demand.
Apple revenue was back up to $8.25 billion in 2004. But Apple was just getting started. The next seven years would see that revenue climb to $13.9 billion in 2005, $19.3 in 2006, $24 billion in 2007, $32.4 in 2008, $42.9 in 2009, $65.2 in 2010, and a staggering $108.2 in 2011.
After working with the PowerPC chipset, Apple transitioned new computers to Intel chips in 2005 and 2006. Keep in mind that most people used desktops at the time and just wanted speed. And it was the era when the Mac was really open source friendly, so having the ability to load in the best the Linux and Unix worlds had to offer - for software inside projects or on servers - was made all the easier. But Intel could produce faster chips and was moving faster. That Intel transition also helped with what we call the "app gap," where applications written for Windows could now be virtualized on the Mac. This helped the Mac get much more adoption in businesses.
Again, the pace was frenetic. People had been almost begging Apple to release a phone for years. The Windows Mobile devices, the Blackberry, the flip phones, even the Palm Treo - they were all crap in Jobs' mind. Even the ROKR that had iTunes in it was crap. So Apple released the iPhone in 2007 in a now-iconic Jobs presentation. The early version didn't have apps, but it was instantly one of the most sought-after gadgets. And in an era where people paid $100 to $200 for phones, it changed the way we thought of the devices. In fact, the push notifications and app culture and always-on connectivity fulfilled the General Magic dream that the Newton never could, and truly moved us all into an always-on Internet culture.
The Apple TV was also released in 2007. I can still remember people talking about Apple releasing a television at the time. The same way they talk about Apple releasing a car. It wasn’t a television though, it was a small whitish box that resembled a Mac Mini - just with a different media-browsing type of Finder. Now it’s effectively an app to bootstrap the media apps on a Mac.
It had been a blistering 10 years. We didn't even get into Pages or FaceTime. And they weren't done just yet. The iPad was released in 2010. By then, Apple revenues exceeded those of Microsoft. The return and the comeback was truly complete.
Similar technology used to build the Apple online store was also used to develop the iTunes Store and then, in 2008, the App Store. Here, rather than going to a site you might not trust and downloading an installer file with crazy levels of permissions, users could get their software from one vetted, centralized storefront.
One place where it's still a work in progress to this day was iTools, released in 2000, rebranded to .Mac in 2002 and MobileMe in 2008, and now called iCloud. Apple's vision to sync all of our data between our myriad devices wirelessly was a work in progress and never met the lofty goals set out. Some services, like Find My iPhone, work great. Others, not so much. Jobs famously fired the team lead at one point. And while it's better than it was, it's still not where it needs to be.
Steve Jobs passed away in 2011 at 56 years old. His first act at Apple changed the world, ushering in first the personal computing revolution and then the graphical interface revolution. He left an Apple that meant something. He returned to a demoralized Apple and brought digital media, portable music players, the iPhone, the iPad, the Apple TV, the iMac, the online music store, the online App Store, and so much more. The world had changed in that time, so he left, well, one more thing. You see, when they started, privacy and security weren't much of a thing. Keep in mind, computers didn't even have hard drives. The early days of the Internet after his return were a fairly safe place. But by the time he passed away there were some troubling trends. The data on our phones and computers could weave together nearly every bit of our lives for an outsider. Not only could this lead to identity theft, but with the growing advertising networks and machine learning capabilities, the consequences of privacy breaches on Apple products could be profound for society. He left an ethos behind: build great products, but not at the expense of those who buy them. One his successor Tim Cook has maintained.
On the outside it may seem like that daunting 10-plus years of product releases has slowed. We still have the MacBook, the iMac, a tower, a mini, an iPhone, an iPad, an Apple TV. We now have HomeKit, a HomePod, new models of all those devices, Apple silicon, and some new headphones - but more importantly, Apple has had to retreat a bit internally and direct some of those product development cycles to privacy, protecting users, and shoring up the security model. Managing a vast portfolio of products in the largest company in the world means doing those things isn't always altruistic. Big companies can mean big lawsuits when things go wrong. These will come up as we cover the history of the individual devices in greater detail.
The history of computing is full of stories of great innovators. Very few got a second act. Few, if any, had a first act as impactful as Steve Jobs had - let alone both. And it wasn't just him in any of these. There are countless people, from software developers to support representatives to product marketing gurus to the people who write the documentation. It was all of them, working with inspiring leadership and world class products, that helped as much as any other organization in the history of computing to shape the digital world we live in today.
Strap yourselves in for coverage of 11 magazine issues including a tiny bit on the large magazine Byte -- there’s an article on Star Raiders strategies that I can’t pass up. Game review is of Deluxe Invaders, which is a high quality Space Invaders clone. Oh, and there’s a discussion of a game called Hazard Run and its ties to a bad TV show and a worse flag.
QWERTY. It's a funny word. Or not a word. But also not an acronym per se. Those are the top six letters on a modern keyboard. Why that order? Because spacing out commonly paired letters let the hammers on a traditional typewriter travel to and fro without jamming, which in the end made us more efficient with our time while typing. The concept of the keyboard goes back almost as far as moveable type - but it took hundreds of years to standardize where we are today.
Johannes Gutenberg is credited with developing the printing press in the 1450s. Printing using wooden blocks had come to the Western world from China, which led him to replace the wood or clay characters with metal, giving us what we now think of as moveable type. This meant we were now arranging blocks of characters to print words onto paper. From there it was only a matter of time before we would realize that pressing a key could stamp a character onto paper as we went, rather than composing a full page and then pressing ink to paper.
The first to get credit for pressing letters onto paper using a machine was Venetian Francesco Rampazzetto in 1575. But as with many innovations, this one needed to bounce around in the heads of inventors until the appropriate level of miniaturization and precision was ready. Henry Mill filed an English patent in 1714 for a machine that could type (or impress) letters progressively. By then, printed books were ubiquitous but we weren’t generating pages of printed text on the fly just yet.
Others would develop similar devices but from 1801 to 1810, Pellegrino Turri in Italy developed carbon paper. Here, he coated one side of paper with carbon and the other side with wax. Why did he invent that, other than to give us an excuse to say carbon copy later (and thus the cc in an email)?
Either he or Agostino Fantoni da Fivizzano invented a mechanical machine for pressing characters to paper for Countess Carolina Fantoni da Fivizzano, a blind friend of his. She would go on to send him letters written on the device, some of which exist to this day. More inventors tinkered with the idea of mechanical writing devices, often working in isolation from one another.
One was a surveyor, William Austin Burt. He found the handwritten documents of his field laborious, and so gave us the typographer in 1829. Each letter had to be manually moved into position to print, so it wasn't all that much faster than writing by hand, but the name would later be hyphenated to form type-writer. And with precision increasing and a lot of invention going on at the time, there were other devices. His patent, incidentally, was signed by Andrew Jackson.
John Pratt introduced his Pterotype in an article in Scientific American in 1867. It was a device that more closely resembles the keyboard layout we know today, with 4 rows of keys and a split in the middle for hands. Others saw the article and continued their own innovative additions.
Frank Hall had worked on the telegraph before the Civil War and used his knowledge there to develop a Braille writer, which functioned similarly to a keyboard. He would move to Wisconsin, where he came in contact with another team developing a keyboard.
Christopher Latham Sholes saw the article and, along with Carlos Glidden and Samuel Soule out of Milwaukee, developed what we know as the standard QWERTY keyboard layout from 1867 to 1868. Around the same time, Danish pastor Rasmus Malling-Hansen introduced the writing ball in 1870. It could also type letters onto paper, but with a much more complicated keyboard layout. It was actually the first typewriter to go into mass production - but at this point new inventions were starting to follow the QWERTY layout. Because asdfjkl;. Both, though, were looking to increase typing speed: Malling-Hansen's layout put consonants on the right side and vowels on the left, while Sholes and Glidden mixed keys up to help reduce the strain on hardware as it recoiled, splitting the letters of common pairs between the sides.
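You can get a rough feel for the Sholes approach by measuring how far apart the letters of common English digraphs sit on QWERTY. The digraph list and the crude grid distance below are mine, purely for illustration - nothing from the original design notes:

# Common English digraphs and how far apart their letters sit on
# QWERTY, using a crude row-plus-column distance.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def position(letter):
    for row, keys in enumerate(ROWS):
        col = keys.find(letter)
        if col != -1:
            return row, col
    raise ValueError(f"not a letter key: {letter}")

def key_distance(a, b):
    (r1, c1), (r2, c2) = position(a), position(b)
    return abs(r1 - r2) + abs(c1 - c2)

for digraph in ["th", "he", "in", "an", "er"]:
    print(digraph, key_distance(*digraph))
# th 2, he 4, in 4, an 6... most common pairs land well apart.
# "er" is the famous exception - those two keys sit side by side.

Not proof of anything, but it shows the flavor of the trade-off: scatter the letters that tend to follow each other so the typebars don't collide.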
James Densmore encountered the Sholes work and jumped in to help. They had it relentlessly tested and iterated on the design, getting more and more productivity gains and making the device more and more hardy. When the others left the project, it was Densmore and Sholes carrying on. But Sholes was also a politician and editor of a newspaper, so had a lot going on. He sold his share of the patent for their layout for $12,000 and Densmore decided to go with royalties instead.
By the 1880s, the invention had been floating around long enough, and given a standardized keyboard it was finally ready to be mass produced. That began with the Sholes & Glidden Type Writer, introduced in America in 1874, followed by the Caligraph. But it was Remington that would take the Sholes patent and create the Remington Typewriter, removing the hyphen from the word typewriter and going mainstream - netting Densmore a million and a half bucks in 1800s money for his royalties. And if you've seen anything typed on one, you'll note that it supported one font: a monospaced sans serif in the Grotesque style.
Characters had always been upper case. Remington added a shift key to give us the ability to type both upper and lower case in 1878 with the Remington Model 2. This was also where we got the ampersand, parentheses, percent symbol, and question mark as shift characters for the numbers. Remington also added tab and margins in 1897. Mark Twain was the first author to turn in a manuscript from a typewriter - what else but a Remington. By then, we were experimenting with the sizes and spaces between characters, or kerning, to make typed content easier to read. Some companies moved to slab serif or Pica fonts and typefaces. You could really tell a lot about a company by that Olivetti with its modern, almost anti-Latin fonts.
The Remington Typewriter Company would later merge with the Rand Kardex company to form Remington Rand, making typewriters and guns, and then in 1950 acquiring the Eckert-Mauchly Computer Corporation, which made ENIAC - arguably the first all-digital computer. Rand also acquired Engineering Research Associates (or ERA) and introduced the UNIVAC. Electronics maker Sperry acquired them in 1955, and then merged with Burroughs to form Unisys in 1988, still a thriving company. But what's important is that they knew typewriters. And keyboards.
But electronics had been improving in the same era that Remington took their typewriters mainstream, and before. Samuel Morse developed the recording telegraph in 1835, and David Hughes added the printing telegraph. Emile Baudot gave us a 5-bit code in the 1870s that enhanced that, but those machines still used keys similar to what you find on a piano. The typewriter hadn't merged with digital communications just yet. Thomas Edison patented an electric typewriter in 1872 but didn't produce a working model. And this was a great time of innovation - Alexander Graham Bell, for example, was hard at work patenting the telephone at the time.
James Smathers then gave us the first electric typewriter in 1920, and by the 1930s the improved Baudot code - where we get the term baud - was combined with a QWERTY keyboard by Siemens and others to give us typing over the wire. The Teletype Corporation's lineage dates to 1906, and it would go from tape punches and readers to producing the teletypes that let users dial into mainframes on the 1970s timesharing networks. But we're getting ahead of ourselves. How did we eventually end up plugging a keyboard into a computer?
Herman Hollerith, the mind behind the original IBM punch cards for tabulating machines before his company got merged with others to form IBM, brought us text keypunches, which were later used to input data into early computers. The BINAC computer used a similar representation with 8 keys, and an electromechanical control was added to input data into the computer the way a punch card might - for this, think of a modern 10-key pad. Given that we had electric typewriters for a couple of decades, it was only a matter of time before a full keyboard worth of text was needed on a computer. That came in 1954 with pioneering work done at MIT. Here, Douglas Ross wanted to hook up a Flexowriter electric typewriter to a computer, which would be done the next year as yet another of the huge innovations coming out of the Whirlwind project at MIT. With the addition of core memory to computing, that was the first time a real keyboard - and being able to write characters directly into a computer - was really useful. Nearly 400 years after the first attempts to build a moveable type machine, and just shy of 100 years after the layout had been codified, the computer keyboard was born.
The PLATO team at the University of Illinois at Urbana-Champaign in the late 60s was one of many research teams seeking cheaper input/output mechanisms for their computer, ILLIAC. Prior to moving to standard keyboards, they built custom devices with fewer keys to help students select multiple choice answers. But eventually they too used teletype-esque systems.
Those early keyboards were mechanical. They still made a heavy, clanky sound when the keys were pressed - not as much as a big mechanical typewriter, but not like the keyboards we use today. These used keys with springs inside them. Springs would be replaced with pressure pads in some machines, including the Sinclair ZX80 and ZX81, and the Timex Sinclair 1000. Given that there were fewer moving parts, they were cheap to make. They used conductive traces between two membranes: press a key and the membranes met, closing the circuit and letting electricity flow; release it and the flow stopped. I never liked them because they just didn't have that feel. In fact, they're still used in devices like microwaves to provide buttons under LED lights that you can press.
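Membrane or mechanical, the keys usually sit on a row/column matrix that the keyboard's controller sweeps many times a second - the ZX81's 40 keys, for instance, were wired as an 8-by-5 grid. Here's a toy Python sketch of that scan loop; drive_row and read_column are stand-ins for real hardware I/O, not any actual API:

# Toy keyboard-matrix scan: energize one row at a time and see which
# column lines go live; current only flows through a pressed key.
ROWS, COLS = 8, 5   # a ZX81-style 40-key grid

def drive_row(row):
    # stand-in: put voltage on one row line
    pass

def read_column(col):
    # stand-in: is this column line carrying current?
    return False

def scan_matrix():
    pressed = set()
    for row in range(ROWS):
        drive_row(row)
        for col in range(COLS):
            if read_column(col):
                pressed.add((row, col))
    return pressed

print(scan_matrix())   # -> set() until a key closes a circuit

Debouncing and ghost-key handling layer on top of this, but that sweep is still the heart of how a computer reads a keyboard today.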
By the late 1970s, keyboards were becoming more and more common. The next advancement was chiclet keyboards, common on the TRS-80 and the IBM PCjr. These were like membrane keyboards but used molded rubber for the keys. Scissor switch keyboards became the standard for laptops - these involve a couple of pieces of plastic under each key, arranged like a scissor. And more and more keyboards were produced.
With an explosion in the amount of time we spent on computers, we eventually got about as many designs of ergonomic keyboards as you can think of. Here, doctors or engineers or just random people would attempt to raise or lower the hands, move the hands apart, or lower or raise various keys. But as we moved from desktops to laptops, or to typing directly on screens as we do with tablets and phones, those sell less and less.
I wonder what Sholes would say if you showed him and the inventors he worked with what the QWERTY keyboard looks like on an iPhone today? I wonder how many people know that at least two of the steps in the story of the keyboard had to do with helping the blind communicate through the written word? I wonder how many know about the work Alexander Graham Bell did with the deaf and the impact that had on his understanding of the vibrations of sound, the emergence of the phonautograph to record sound, and how that would become acoustic telegraphy and then the telephone, which could later stream baud? Well, we’re out of time for today, so that story will have to get tabled for a future episode.
In the meantime, look around for places where there’s no standard. Just like the keyboard layout took different inventors and iterations to find the right amount of productivity, any place where there’s not yet a standard just needs that same level of deep thinking - and sometimes generations - to get it perfected. We still use the QWERTY layout today, so sometimes once we find the right mix, we’ve set in motion an innovation that can become a true game changer. And if it’s not ready, at least we’ve contributed to the evolutions that revolutionize the world. Even if we don’t use those inventions. Bell famously never had a phone installed in his office. Because distractions. Luckily I disabled notifications on my phone before recording this or it would never get out…
David Pogue reviews the PowerBook Duo 210/230 and the companion Duo Dock. NuBus and SCSI weren’t hot pluggable, meaning you had to shut down the machine every time you docked or undocked!
Original text from Macworld Magazine, March 1993.
Steve Jobs had an infamous split with the board of directors of Apple and left the company shortly after the release of the original Mac. He was an innovator who, at 21, had started Apple in the garage with Steve Wozniak and, at 30, while already plenty wealthy, felt he still had more to give and do. We can say a lot of things about him but he was arguably one of the best product managers ever.
He told Apple he’d be taking some “low-level staffers” and ended up taking Rich Page, Bud Tribble, Dan'l Lewin, George Crow, and Susan Barnes, who would be the CFO. They also took Susan Kare and Joanna Hoffman. They had their eyes on a computer that specifically targeted higher education. They wanted to build computers for researchers and universities.
Companies like CDC and Data General had done well in universities. The team knew there was a niche that could be carved out there. There were some gaps with the Mac that made it a hard sell in research environments. Computer scientists needed object-oriented programming and protected memory. Having seen the work at PARC on object-oriented languages, Jobs knew the power of the approach and how it could future-proof software.
Unix System V had branched a number of times, and it was a bit more of a red ocean than I think they realized. But Jobs put up $7 million of his own money to found NeXT Computer. He’d add another $5 million and Ross Perot would add another $20 million. The pay bands were among the most straightforward of any startup ever founded: the senior staff made $75,000 and everyone else got $50,000. Simple.
Ironically, so soon after the 1984 Super Bowl ad where Jobs bashed IBM, they hired the man who designed the IBM logo, Paul Rand, to design a logo for NeXT. They paid him $100,000 flat. Imagine the phone call when Jobs called IBM to get them to release Rand from a conflict of interest in working with them.
They released the first computer in 1988. The NeXT Computer, as it was called, was expensive for the day, coming in at $6,500. It sported a Motorola 68030 CPU and clocked in at a whopping 25 MHz. And it came with a special operating system called NeXTSTEP.
NeXTSTEP was based on the Mach kernel with some of the source code coming from BSD. If we go back a little, Unix was started at Bell Labs in 1969 and by the late 70s had forked into BSD, Unix Version 7, and PWB - with each of those resulting in other forks that would eventually become OpenBSD, SunOS, NetBSD, Solaris, HP-UX, Linux, AIX, and countless others.
Mach was developed at Carnegie Mellon University and is one of the earliest microkernels. With Mach, Richard Rashid (who would later found Microsoft Research) and Avie Tevanian were looking specifically at distributed computing. The Mach project was kicked off in 1985, the same year Jobs left Apple.
Mach was backwards-compatible with BSD 4.2 and so could run a pretty wide variety of software. It allowed for threads, or units of execution, and tasks, the containers that hold the resources threads run against. It provided support for messages - which, for object-oriented languages, are typed data objects that fall outside the scope of tasks and threads - and then a protected message queue to manage the messages between tasks and the rights to access them. They stood it up on a DEC VAX and released it publicly in 1987.
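Those ports and messages still sit beneath macOS today, which makes them easy to poke at. Here’s a minimal sketch of the model, assuming a modern macOS host: mach_port_allocate, mach_port_insert_right, and mach_msg are the real Mach calls, while the demo message structs and their payload field are ours for illustration.

```c
#include <mach/mach.h>
#include <stdio.h>

/* Our illustrative message: a Mach header plus one typed payload field. */
typedef struct {
    mach_msg_header_t header;
    int payload;
} demo_msg_t;

/* On receive, the kernel appends a trailer, so leave room for it. */
typedef struct {
    mach_msg_header_t header;
    int payload;
    mach_msg_trailer_t trailer;
} demo_rcv_msg_t;

int main(void) {
    mach_port_t port;
    /* Create a port - a protected message queue - in our own task. */
    if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                           &port) != KERN_SUCCESS)
        return 1;
    /* Grant ourselves a send right so we can message our own port. */
    mach_port_insert_right(mach_task_self(), port, port,
                           MACH_MSG_TYPE_MAKE_SEND);

    demo_msg_t msg = {0};
    msg.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    msg.header.msgh_size = sizeof(msg);
    msg.header.msgh_remote_port = port;
    msg.payload = 42;

    /* Enqueue the message... */
    mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
             MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    /* ...and dequeue it from the same port. */
    demo_rcv_msg_t rcv = {0};
    mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv),
             port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    printf("received payload: %d\n", rcv.payload);
    return 0;
}
```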
Here’s the thing: Unix licensing from Bell Labs was causing problems, so it was important to everyone that the license be open. And this would be important to NeXT as well. NeXT needed a next-generation operating system, and so Avie Tevanian was recruited to join NeXT as the Vice President of Software Engineering. There, he designed NeXTSTEP with a handful of engineers.
The computers had custom boards and were fast. And they were a sleek black like nothing I’d seen before. But Bill Gates was not impressed, claiming, “If you want black, I’ll get you a can of paint.” But some people loved the machines, especially some of the tools NeXT developed for programmers.
They got a factory to produce the machines, and it only needed to crank out 100 a month as opposed to the thousands it was built to produce. In other words, the price tag was keeping universities from buying the machines. So they pivoted a little. They went up-market with the NeXTcube in 1990, which ran NeXTSTEP, OPENSTEP, or NetBSD and came with the Motorola 68040 CPU. This machine came in at $8,000 to almost $16,000. It came with a hard drive. For the lower end of the market they also released the NeXTstation in 1990, which shipped for just shy of $5,000.
The new models helped but by 1991 they had to lay off 5 percent of the company and another 280 by 1993. That’s when the hardware side got sold to Canon so NeXT could focus exclusively on NeXTSTEP. That is, until they got acquired by Apple in 1997.
By the end, they’d sold around 50,000 computers. Apple bought NeXT for $429 million plus 1.5 million shares of Apple stock, which was trading at around $17 a share, so worth another $25 and a half million dollars. That makes the deal worth about $454 million, or $9,080 per machine NeXT ever built. But it wasn’t about the computer business, which had already been spun down. It was about Jobs and getting a multi-tasking, object-oriented powerhouse of an operating system - the grandparent of OS X and the derivative macOS, iOS, iPadOS, watchOS, and tvOS forks.
The work done at NeXT has had a long-term impact on the computer industry as a whole. For one, the spinning pinwheel on a Mac. And the Dock. And the App Store. And Objective-C. But also Interface Builder as an IDE was revolutionary. Today we use Xcode, but many of the components go all the way back. And so much more.
After the acquisition, NeXTSTEP became Mac OS X Server in 1999 and by 2001 was Mac OS X. The rest is history. But the legacy of the platform is considerable. Just on NeXTSTEP we had a few pretty massive successes.
Tim Berners-Lee developed the first web browser, WorldWideWeb, on a NeXT running NeXTSTEP. Other browsers for other platforms would come, but his work became the web as we know it today. The machine he developed the web on is now on display at the National Museum of Science and Media in the UK.
We also got games like Quake, Heretic, Strife, and Doom, built with Interface Builder. And WebObjects. And the people.
Tevanian came with NeXT to Apple as the Senior Vice President of Software Engineering. Jobs became an advisor, then CEO. Craig Federighi came with the acquisition as well - now Apple’s Senior Vice President of Software Engineering. And I know dozens of others who came in from NeXT and helped reshape the culture at Apple.
Next.com still redirects to Apple.com. It took three years to ship that first computer at NeXT. It took two and a half years to develop the iPhone. The Apple II, iPod, iPad, and first iMac took much less; the original Mac took nearly five years. Some things take a little more time to flesh out than others. Some need the price of components, or new components, to show up before you know a product can be insanely great. Some need false starts. Steve Jobs famously said in 1983 that Apple wanted to create a computer in a book. That finally came with the release of the iPad in 2010, 27 years later.
And so the final component of the Apple acquisition of NeXT to mention is Steve Jobs himself. He didn’t initially come in. He’d just become a billionaire off Pixar and was doing pretty darn well. His arrival back at Apple signified the end of a long drought for the company, and all those products we mentioned, plus the iTunes music store and the App Store (both initially built on WebObjects), would change the way we consume content forever. His impact was substantial. For one, the stock: adjusting for splits, Apple was trading at what amounts to 22 cents a share, and without his return it might still be. Instead they’re the most highly valued company in the world. But that pales in comparison to the way he and his teams and that relentless eye for product and design actually changed the world. And the way his perspectives on privacy help protect us today, long after he passed.
The hero’s journey is a storytelling template that follows a hero from disgrace, through a grand adventure in which they learn the mistakes of their past and reinvent themselves amidst a crisis, to a return home transformed. NeXT and Pixar represent part of that journey here. Which makes me wonder: what is my own monomyth? Where will I return to? What is or was my abyss? These can be large or small. And while very few people in the world will have one like Steve Jobs did, we should all reflect on ours and learn from them. And yes, that was plural, because life is not so simple that there is only one.
The past, and our understanding of it, predicts the future. Good luck on your journey.
James and John discuss eBay Finds: Mac Guided Tour audio cassette, Quadra 950, Apple umbrella. They look back at February 2001 in MacAddict magazine, and news includes mini vMac on your calculator, and a GameBoy as an Apple TV remote control.
I often think of companies in relation to their contribution to the next evolution in the forking and merging of disciplines in computing that brought us to where we are today. Many companies have multiple contributions. Few have as many such contributions as Apple. But there was a time when they didn’t seem so innovative.
This lost decade began about halfway through the tenure of John Sculley and can be seen through the lens of the CEOs. There was Sculley, CEO from 1983 to 1993. Co-founders and spiritual centers of Apple, Steve Jobs and Steve Wozniak, left Apple in 1985 - Jobs to create NeXT and Wozniak to jump into a variety of companies, making universal remotes, wireless GPS trackers, and other adventures.
This meant Sculley was finally in a position to be fully in charge of Apple. His era would see sales grow 10x, from $800 million to $8 billion. Operationally, he was one of the more adept executives at cash management, putting $2 billion in the bank by 1993. Suddenly the vision of Steve Jobs was paying off. That original Mac started to sell and grow markets. But during this time, first the IBM PC and then the clones, all powered by the Microsoft operating system, completely took the operating system market for personal computers. Apple had high margins yet struggled for relevance.
Under Sculley, Apple released HyperCard, funded a skunkworks team in General Magic - arguably the beginning of ubiquitous computing - and, using many of those same ideas, backed the Newton, coining the term personal digital assistant. Under his leadership, Apple marketing sent 200,000 people home with a Mac to try it out. Putting the device in the hands of the people is probably one of the more important lessons they still teach newcomers who work in Apple Stores.
Looking at the big financial picture, it seems like Sculley did alright. But in Apple’s fourth-quarter earnings call in 1993, they announced a 97 percent drop in profit from the same quarter in 1992. This was also when a serious technical debt problem began to manifest itself.
The Mac operating system grew from the system those early pioneers built in 1984 into Macintosh System Software, going from version 1 to version 7. But after annual releases leading to version 6, it took three years to develop System 7, and the direction to take with the operating system caused a schism in Apple engineering around what would happen once 7 shipped. It seems like most companies go through almost the exact same schism. Microsoft quietly grew NT to resolve their issues with Windows 3 and 95 until it finally became the thing in 2000. IBM had invested heavily into basically that same code with Warp - but wanted something new.
Something happened while Apple was building System 7: they lost Jean-Louis Gassée, who had been head of development since Steve Jobs left. When Sculley gave everyone a copy of his memoir, Gassée provided a copy of The Mythical Man-Month, from Fred Brooks’ experience with the IBM System/360. It’s unclear today if anyone read it. To me this is really the first big sign of trouble. Gassée left to build another OS, BeOS.
By the time System 7 was released, it was clear that the operating system was bloated and needed a massive object-oriented overhaul, and under Sculley the teams were split, with one team eventually getting spun off into its own company that then became part of IBM to help with their OS woes. The team at Apple took six years to release the next operating system. Meanwhile, one of Sculley’s most defining decisions was to avoid licensing the Macintosh operating system - probably because it was just too big a mess to do so. And yet everyday users didn’t notice all that much, and most loved it.
But third-party developers left. And that was at one of the most critical times in the history of personal computers, because Microsoft was gaining a lot of developers for Windows 3.1 and then released the wildly popular Windows 95.
The Mac accounted for most of the revenue of the company, but under Sculley the company dumped a lot of R&D money into the Newton. As with other big projects, the device took too long to ship and when it did, the early PDA market was a red ocean with inexpensive competitors. The Palm Pilot effectively ended up owning that pen computing market.
Sculley was a solid executive. And he played the part of visionary from time to time. But under his tenure Apple found operating system problems, rumors about Windows 95, and developers leaving Apple behind for the Windows ecosystem - and whether those technical issues are on his lieutenants or him, the buck stops there. The Windows clone industry led to PC price wars that caused Apple revenues to plummet. And so Markkula was off to find a new CEO.
Michael Spindler became the CEO from 1993 to 1996. The failure of the Newton and Copland operating systems are placed at his feet, even though they began in the previous regime. Markkula had hired Digital Equipment and Intel veteran Spindler to assist in European operations, and he rose to President of Apple Europe and then ran all international operations. He would become the only CEO to have no new Mac operating system released in his tenure. Missed deadlines abounded with Copland and then Tempo, which would become Mac OS 8.
And those aren’t the only products that came out at the time. We also got the PowerCD, the Apple QuickTake digital camera, and the Apple Pippin. Bandai had begun trying to develop a video game system built around a scaled-down version of the Mac. The Apple Pippin realized Markkula’s idea from when the Mac was first conceived as an Apple video game system.
There were a few important things that happened under Spindler though. First, Apple moved to the PowerPC architecture. Second, he decided to license the Macintosh operating system to companies wanting to clone the Macintosh. And he held discussions with IBM, Sun, and Philips about acquiring Apple. Dwindling reserves, increasing debt. Something had to change, and within three years, Spindler was gone.
Gil Amelio was CEO from 1996 to 1997. He moved from the board, while CEO of National Semiconductor, to CEO of Apple. He inherited a company short on cash and high on expenses. He quickly began pushing forward OS 8, cutting a third of the staff, streamlining operations, dumping some poor-quality products, and releasing new products Apple needed to be competitive, like the Apple Network Server.
He also tried to acquire BeOS for $200 million, which would have brought Gassée back, but instead acquired NeXT for $429 million. But despite the good trajectory he had the company on, the stock was still dropping, Apple continued to lose money, and an unstoppable force was back - now with another decade of experience launching two successful companies: NeXT and Pixar.
The end of the lost decade can be seen as the return of Steve Jobs. Apple didn’t have an operating system. They were in a lurch, so to speak. I’ve seen or read it portrayed that Steve Jobs intended to take control of Apple. And I’ve seen it portrayed that he was happy digging up carrots in the backyard but came back because he was inspired by Jony Ive. But I remember the feel around Apple changed when he showed back up on campus. As with other companies that dug themselves out of a lost decade, there was a renewed purpose. There was inspiration.
By 1997, one of the heroes of the personal computing revolution, Steve Jobs, was back. But not quite… He became interim CEO in 1997 and immediately turned his eye to making Apple profitable again. Over the past decade, the product line had expanded to include dozens of models of the Mac. Anyone who’s read Geoffrey Moore’s Crossing the Chasm, Inside the Tornado, and Zone To Win knows this story all too well. We grow, we release new products, and then we eventually need to take a look at the portfolio and make some hard cuts.
Apple released the Macintosh II in 1987, then the Macintosh Portable in 1989, then the IIcx and IIci in ’89 along with the Apple IIgs, the last of that series. Facing competition in different markets, we saw the LC line come along in 1990 and the Quadra in 1991, the same year three models of the PowerBook were released. Different printers, scanners, and CD-ROM drives had come along by then, and in 1993 we got a Macintosh TV, the Apple Newton, and more models of the LC. By 1994 there were even more of those, plus the QuickTake, the Workgroup Server, and the Pippin, and by 1995 there were a dozen Performas, half a dozen Power Macintosh 6400s, the Apple Network Server, and yet another version of the Performa 6200. We added the eMate and beige G3 in 1997. The SKU list was a mess. Cleaning that up took time but helped prepare Apple for a simpler sales process. Today we have a good, better, best with each device, with many a computer being build-to-order.
Jobs restructured the board, ending the long tenure of Mike Markkula, who’d been so impactful at each stage of the company so far. One of the forces behind the rise of the Apple computer and the Macintosh was about to change the world again, this time as the CEO.
There was a nexus of Digital Research and Xerox PARC, along with Stanford and Berkeley in the Bay Area. The rise of the hobbyists and the success of Apple attracted some of the best minds in computing to Apple. This confluence was about to change the world. One of those brilliant minds that landed at Apple started out as a technical writer.
Apple hired Jef Raskin as their 31st employee, to write the Apple II manual. He quickly started harping on people to build a computer that was easy to use. Mike Markkula wanted to release a gaming console or a cheap computer that could compete with the Commodore and Atari machines at the time. He called the project “Annie.”
The project began with Raskin, but he had a very different idea than Markkula’s. He summed it up in an article called “Computers by the Millions” that wouldn’t see publication until 1982. His vision was closer to his PhD dissertation: bringing computing to the masses. For this, he envisioned a menu-driven operating system that was easy to use and inexpensive. It was not yet a GUI in the sense of a windowing operating system, and so it could run on chips that were rapidly dropping in price. He planned to use the 6809 chip for the machine and give it a five-inch display.
He didn’t tell anyone that he had a PhD when he was hired, as the team at Apple was skeptical of academia. Jobs provided input, but was off working on the Lisa project, which used the 68000 chip. So they had free rein over what they were doing.
Raskin quickly added Joanna Hoffman for marketing. She was on leave from getting a PhD in archaeology at the University of Chicago and was the marketing team for the Mac for over a year. They also added Burrell Smith, employee #282 from the hardware technician team, to do hardware. He’d run with the Homebrew Computer Club crowd since 1975 and had just strolled into Apple one day and asked for a job.
Raskin also brought in one of his students from the University of California, San Diego, who was taking a break from working on his PhD in neurochemistry. Bill Atkinson became employee 51 at Apple and joined the project. They pulled in Andy Hertzfeld, whom Steve Jobs had hired when Apple bought one of his programs as he was wrapping up his degree at Berkeley, and who’d been sitting on the Apple services team doing Apple III demos.
They added Larry Kenyon, who’d worked at Amdahl and then on the Apple III team. Susan Kare came in to add art and design. They, along with Chris Espinosa - who’d been in the garage with Jobs and Wozniak working on the Apple I - ended up comprising the core team.
Over time, the team grew. Bud Tribble joined as the manager for software development. Jerrold Manock, who’d designed the case of the Apple II, came in to design the now-iconic Macintosh case. The team would eventually expand to include Bob Belleville, Steve Capps, George Crow, Donn Denman, Bruce Horn, and Caroline Rose as well. It was still a small team. And they needed a better code name. But chronologically let’s step back to the early project.
Raskin chose his favorite apple, the McIntosh, as the codename for the project, respelling it Macintosh. As far as codenames go it was a pretty good one. Their mission would be to ship a machine that was easy to use, would appeal to the masses, and be at a price point the masses could afford. They were looking at 64k of memory, a Motorola 6809 chip, and a 256-by-256 bitmap display. Small, light, and inexpensive.
Jobs’ relationship with the Lisa team was strained, and after he was taken off that project he started moving in on the Macintosh team. It was quickly the Steve Jobs show.
Having seen what could be done with the Motorola 68000 chip on the Lisa team, Jobs had them redesign the board to work with that. After visiting Xerox PARC at Raskin’s insistence, Jobs finally got the desktop metaphor and true graphical interface design.
Xerox had not been quiet about the work at PARC. Going back to 1972 there were even television commercials. And Raskin had done time at PARC while on sabbatical from Stanford. Information about Smalltalk had been published, and people like Bill Atkinson were reading about it in college. People had been exposed to the mouse all around the Bay Area in the 60s and 70s, or had read Engelbart’s scholarly works on it. Many of the people who worked on these projects had doctorates and were academics. They shared their research as freely as love was shared during that counter-culture time, just as it had passed from MIT to Dartmouth and then, in the back of Bob Albrecht’s VW, had spread around the country in the 60s. That spirit of innovation and the constant evolutions over the past 25 years found their way to Steve Jobs.
He saw the desktop metaphor and mouse and fell in love with it, knowing they could build one for less than the $400 unit Xerox had. He saw how an object-oriented programming language like Smalltalk made all that possible. The team was already on their way to the same types of things and so Jobs told the people at PARC about the Lisa project, but not yet about the Mac. In fact, he was as transparent as anyone could be. He made sure they knew how much he loved their work and disclosed more than I think the team planned on him disclosing about Apple.
This is the point where Larry Tesler and others realized that the group of rag-tag garage-building Homebrew hackers had actually built a company that had real computer scientists and was on track to change the world. Tesler and some others would end up at Apple later - to see some of their innovations go to a mass market. Steve Jobs at this point totally bought into Raskin’s vision. Yet he still felt they needed to make compromises on price and get better hardware to make it all happen.
Raskin couldn’t make the kinds of compromises Jobs wanted. He also had an immunity to the now-infamous Steve Jobs reality distortion field, and they clashed constantly. So eventually Raskin left the project, just when it was starting to take off. Raskin would go on to work with Canon to build his vision, which became the Canon Cat.
With Raskin gone, and armed with a dream team of mad scientists, they got to work, tirelessly pushing towards shipping a computer they all believed would change the world. Jobs brought in Bill Fernandez to help with projects like the Mac OS and later HyperCard. Wozniak had a pretty big influence over Raskin in the early days of the Mac project and helped here and there with the project, like with the bit-serial peripheral bus on the Mac.
Steve Jobs wanted an inexpensive mouse that could be manufactured en masse. Jim Yurchenco from Hovey-Kelley, later called IDEO, got the task - given that the trusted engineers at Apple had full dance cards. He looked at the Xerox mouse and other devices around, including trackballs in Atari arcade machines. Those used optics instead of mechanical switches: as the ball rolled, beams of light would be interrupted, and the cost of those components had come down faster than the technology in the Xerox mouse. He used a ball from a roll-on deodorant stick and got to work. The rest of the team designed the injection-molded case for the mouse. That work began with the Lisa, and by the time they were done, the price was low enough that every Mac could get one.
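For a sense of how interrupted light beams become motion, here’s a minimal sketch of the quadrature decoding an opto-mechanical mouse performs on each axis. Two beams are offset so their on/off pattern forms a 2-bit Gray code that reveals direction as the slotted wheel turns. The function name and transition table are ours for illustration - not anything from the Apple or Hovey-Kelley design - but the Gray-code idea is the standard one.

```c
/* A sketch of quadrature decoding: beam A packs into bit 1, beam B
   into bit 0, and each state transition yields -1, +1, or 0 counts. */
#include <stdint.h>
#include <stdio.h>

static int quadrature_step(uint8_t prev, uint8_t curr) {
    /* Indexed by (prev << 2) | curr: +1 one direction, -1 the other,
       0 for no movement or an invalid (both-beams-flipped) transition. */
    static const int8_t table[16] = {
         0, -1, +1,  0,
        +1,  0,  0, -1,
        -1,  0,  0, +1,
         0, +1, -1,  0
    };
    return table[(prev << 2) | curr];
}

int main(void) {
    /* Turning one way produces the Gray sequence 0, 1, 3, 2, 0, 1... */
    const uint8_t seq[] = {0, 1, 3, 2, 0, 1};
    int position = 0;
    for (int i = 1; i < 6; i++)
        position += quadrature_step(seq[i - 1], seq[i]);
    printf("net counts: %d\n", position);  /* -5 with this table's signs */
    return 0;
}
```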
Armed with a mouse, they figured out how to move windows over the top of one another. Susan Kare designed iconography that lives on today in forms a bit less 8-bit but often every bit as true to the originals. Learning how they wanted to access various components of the desktop, or find things, they developed the Finder. Atkinson gave us marching ants, the concept of double-clicking, the lasso for selecting content, the menu bar, MacPaint, and later, HyperCard.
It was a small team, working long hours, driven by a Jobs who demanded perfection. Jobs made the Lisa team the enemy. Everything not the Mac just sucked. He took the team to art exhibits. He had the team sign the inside of the case to infuse them with the pride of an artist. He killed the idea of long product specifications before writing code, and they just jumped in - building and refining and rebuilding and rapid prototyping. The team responded well to the enthusiasm and the need for perfectionism.
The Mac team was like a rebel squadron. They were like a start-up, operating inside Apple. They were pirates. They got fast and sometimes harsh feedback. And nearly all of them still look back on that time as the best thing they’ve done in their careers.
As IBM and many others had learned the hard way before them, a small, inspired team can get a lot done. With such a small team and the ability to parlay work done for the Lisa, the R&D costs were minuscule until they were ready to release the computer. And yet, one can’t change the world overnight. 1981 turned into 1982 turned into 1983.
More and more people came in to fill gaps. Colette Askeland came in to design the printed circuit board. Mike Boich went to companies to get them to write software for the Macintosh. Berry Cash helped prepare sellers to move the product. Matt Carter got the factory ready to mass-produce the machine. Donn Denman wrote MacBASIC (because every machine needed a BASIC back then). Martin Haeberli helped write MacTerminal and Memory Manager. Bill Bull got rid of the fan. Patti King helped manage the software library. Dan Kottke helped troubleshoot issues with motherboards. Brian Robertson helped with purchasing. Ed Riddle designed the keyboard. Linda Wilkin took on documentation for the engineering team. It was a growing team. Pamela Wyman and Angeline Lo came in as programmers. Hap Horn and Steve Balog came in as engineers.
Jobs had agreed to bring in adults to run the company. So they recruited 44-year-old hotshot John Sculley to come change the world as their CEO rather than sell sugar water at Pepsi. Sculley and Jobs had a tumultuous relationship over time. While Jobs had made tradeoffs on cost versus performance for the Mac, Sculley ended up raising the price for business reasons.
Regis McKenna came in to help with the marketing campaign. He would win over so much trust that he would later get called out of retirement to do damage control when Apple had an antenna problem on the iPhone. We’ll cover Antennagate at some point. They spearheaded the production of the now-iconic 1984 Super Bowl XVIII ad, which shows a woman running from conformity and depicts IBM as the Big Brother from George Orwell’s book, 1984.
Two days after the ad, the Macintosh 128k shipped for $2,495. The price had jumped because Sculley wanted enough money to fund a marketing campaign. It shipped late, and the 128k of memory was a bit underpowered, but it was a success. Many of the concepts, such as the System and the Finder, persist to this day. It came with MacWrite and MacPaint, and some of the other Lisa products were soon to follow, now as MacProject and MacTerminal. But the first killer app for the Mac was Microsoft Word.
Every machine came with a mouse. The machines came with a cassette that featured a guided tour of the new computer. You could write programs in MacBASIC and my second language, MacPascal.
They hit the initial sales numbers despite the higher price. But over time that price bit them, and sales turned sluggish. Yet the team forged on. They introduced the Apple LaserWriter at a whopping $7,000. This was a laser printer based on the Canon 300-dpi engine. Burrell Smith designed a board, and newcomer Adobe knew laser printers, given that the founders were Xerox alumni. They added PostScript, which had initially been thought up while working with Ivan Sutherland and then implemented at PARC, to make for perfect printing at the time.
The sluggish sales caused internal issues. There’s a hangover when we do something great. First there were the famous episodes between Jobs, Sculley, and the board of directors at Apple. Sculley seems to have been portrayed by many as either a villain or a court jester of sorts in the story of Steve Jobs. Across my research, which began with books and notes and expanded to include a number of interviews, I’ve found Sculley to have been admirable in the face of what many might consider a petulant child. But they all knew a brilliant one.
But amidst Apple’s first quarterly loss, Sculley and Jobs had a falling out. Jobs tried to lead an insurrection and ultimately resigned. Wozniak had left Apple already, pointing out that the Apple II was still 70% of the revenues of the company. But the Mac was clearly the future.
They had reached a turning point in the history of computers. The first mass marketed computer featuring a GUI and a mouse came and went. And so many others were in development that a red ocean was forming. Microsoft released Windows 1.0 in 1985. Acorn, Amiga, IBM, and others were in rapid development as well.
I can still remember the first time I sat down at a Mac. I’d used the Apple IIs in school and we got a lab of Macs. It was amazing. I could open a file, change the font size and print a big poster. I could type up my dad’s lyrics and print them. I could play SimCity. It was a work of art. And so it was signed by the artists that brought it to us:
Peggy Alexio, Colette Askeland, Bill Atkinson, Steve Balog, Bob Belleville, Mike Boich, Bill Bull, Matt Carter, Berry Cash, Debi Coleman, George Crow, Donn Denman, Christopher Espinosa, Bill Fernandez, Martin Haeberli, Andy Hertzfeld, Joanna Hoffman, Rod Holt, Bruce Horn, Hap Horn, Brian Howard, Steve Jobs, Larry Kenyon, Patti King, Daniel Kottke, Angeline Lo, Ivan Mach, Jerrold Manock, Mary Ellen McCammon, Vicki Milledge, Mike Murray, Ron Nicholson Jr., Terry Oyama, Benjamin Pang, Jef Raskin, Ed Riddle, Brian Robertson, Dave Roots, Patricia Sharp, Burrell Smith, Bryan Stearns, Lynn Takahashi, Guy "Bud" Tribble, Randy Wigginton, Linda Wilkin, Steve Wozniak, Pamela Wyman and Laszlo Zidek.
Steve Jobs left to found NeXT. Some, like George Crow, Joanna Hoffman, and Susan Kare, went with him. Bud Tribble would become a co-founder of NeXT and then the Vice President of Software Technology after Apple purchased NeXT.
Bill Atkinson and Andy Hertzfeld would go on to co-found General Magic and usher in the era of mobility. One of the best teams ever assembled slowly dwindled away. And the oncoming dominance of Windows in the market took its toll.
It seems like every company has a “lost decade.” Some, like Digital Equipment, don’t recover from it. Others, like Microsoft and IBM (who has arguably had a few), emerge as different companies altogether. Apple seemed to go dormant after Steve Jobs left. They had changed the world with the Mac. They put swagger and an eye for design into computing. But in the next episode we’ll look at that long hangover, where they were left by the end of it, and how they emerged to change the world yet again.
In the meantime, Walter Isaacson weaves together this story about as well as anyone in his book Steve Jobs. Steven Levy brilliantly tells it in his book Insanely Great. Andy Hertzfeld gives some of his stories at folklore.org. And countless other books, documentaries, podcasts, blog posts, and articles cover various aspects as well. The reason it’s gotten so much attention is that where the Apple II was the watershed moment that introduced the personal computer to the mass market, the Macintosh was that moment for the graphical user interface.
Saga II was a program developed in 1960 that automatically wrote screenplays for TV westerns. Outwardly it looks like artificial intelligence, but that's not entirely accurate. Saga has much more in common with CNC software than AI. This episode we take a look at how the same technology that automated manufacturing found its way into digital westerns, and how numerically controlled mills are remarkably similar to stage plays.
Clips drawn from The Thinking Machine: https://techtv.mit.edu/videos/10268-the-thinking-machine-1961---mit-centennial-film
James and John discuss eBay Finds: Macintosh TV remote control, Think Different crystal, and Apple 1. James opens his Apple VideoPhone Kit, and news includes Apple's quarterly report and Spotify iPod.
James, John and Steve discuss eBay Finds: Power Computing 100, SuperMac C600, and Power Computing Field Vest. Steve from Mac84 introduces his series, "The Rise and Fall of the Macintosh Clones" and news includes the Macintosh birthday.