BASIC
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the history of the BASIC programming language. We say BASIC, but really BASIC is more than just a programming language. It’s a family of languages and stands for Beginner’s All-purpose Symbolic Instruction Code. As the name implies, it was written to help students who weren’t math nerds learn how to use computers. When I was selling a house one time, someone who’d been to the open house was roaming around in my back yard, and after seeing a dozen books I’d written on my bookshelf they asked if I was a computer scientist. I really didn’t know how to answer that question. We’ll start this story with Hungarian John George Kemeny. This guy was pretty smart. He was born in Budapest and moved to the US with his family in 1940 when his family fled anti-Jewish sentiment and laws in Hungary. Some of his family would go on to die in the Holocaust, including his grandfather. But safely nestled in New York City, he would graduate high school at the top of his class and go on to Princeton. Check this out: he took a year off to head out to Los Alamos and work on the Manhattan Project under Nobel laureate Richard Feynman. That’s where he met fellow Hungarian immigrant John von Neumann - two of the group George Marx wrote about in his book on great Hungarian emigrant scientists and thinkers, called The Martians. When he got back to Princeton he would get his doctorate and act as an assistant to Albert Einstein. Seriously, THE Einstein. Within a few years he was a full professor at Dartmouth and would go on to publish great works in mathematics. But we’re not here to talk about those contributions to making the world an all-around awesome place. You see, by the 60s math was evolving to the point that you needed computers. And Kemeny and Thomas Kurtz would do something special. Now Kurtz was another Dartmouth professor who got his PhD from Princeton. He and Kemeny got thick as thieves and wrote the Dartmouth Time-Sharing System (keep in mind that time sharing was all the rage in the 60s, as it gave more and more budding computer scientists access to those computer-things that, prior to the advent of Unix and the PC revolution, had mostly been reserved for the high priests of places like IBM). So time sharing was cool, but the two of them would go on to do something far more important. Back in 1956, they had written DARSIMCO, or Dartmouth Simplified Code. As with Pascal, you can blame Algol. Wait, no one has ever heard of DARSIMCO? Oh… I guess they wrote that other language you’re here to hear the story of as well. So in 59 they got a half-million-dollar grant from the Alfred P. Sloan Foundation to build a new department building. That’s when Kurtz actually joined the department full time. Computers were just going from big batch-processed behemoths to interactive systems. They tried teaching with DARSIMCO, FORTRAN, and the Dartmouth Oversimplified Programming Experiment, a classic acronym for 1960s-era DOPE. But they didn’t love the command structure, nor the fact that the languages didn’t produce feedback immediately. What was it called again? Oh, right. In 1964, Kemeny wrote the first iteration of the BASIC programming language and Kurtz joined him very shortly thereafter. They did it to teach students how to use computers. It’s that simple.
And as most software was free at the time, they released it to the public. We might think of this as open source-ish by today’s standards. I say ish because Dartmouth actually chose to copyright BASIC. Kurtz has said that the name BASIC was chosen because “We wanted a word that was simple but not simple-minded, and BASIC was that one.” The first program I wrote was in BASIC. BASIC used line numbers and read kinda’ like the English language. The first line of my program said 10 PRINT “Charles was here” and the computer responded that “Charles was here” - the second program I wrote just added a second line that said: 20 GOTO 10. Suddenly “Charles was here” took up the whole screen and I had to ask the teacher how to terminate the signal. She rolled her eyes and handed me a book. And that, my friend, was the end of me for months. That was on an Apple IIc. But a lot happened with BASIC between 1964 and then. As with many technologies, it took some time to float around and evolve. The syntax was kinda’ like a simplified FORTRAN, making my FORTRAN classes in college a breeze. That initial distribution evolved into Dartmouth BASIC, and they received a $300k grant and used student slave labor to write the initial BASIC compiler. Mary Kenneth Keller was one of those students and went on to finish her doctorate in 65; she and Irving Tang became the first two people awarded PhDs in computer science in the US. After that she went off to Clarke College to found their computer science department. The language is pretty easy. I mean, like PASCAL, it was made for teaching. It spread through universities like wildfire during the rise of minicomputers like the PDP from Digital Equipment and the resultant Data General Nova. This led to the first text-based games in BASIC, like Star Trek. And then came the Altair and one of the most pivotal moments in the history of computing: the porting of BASIC to the platform by Microsoft co-founders Bill Gates and Paul Allen. Tiny BASIC appeared shortly after, and suddenly everyone needed “a BASIC.” You had Commodore BASIC, BBC BASIC, BASIC for the Trash-80 (the TRS-80, if you’re being polite), the Apple II, the Sinclair, and more. Programmers from all over the country had learned BASIC in college on minicomputers, and when the PC revolution came, a huge part of that was the explosion of applications, most of which were written in… you got it, BASIC! I typically think of the end of BASIC coming in 1991, when Microsoft bought Visual Basic off of Alan Cooper and object-oriented programming became the standard. But the things I could do with a simple IF-THEN-ELSE statement. Or a FOR-TO statement, or a WHILE or REPEAT or DO loop. Absolute values, exponential functions, cosines, tangents, even super-simple random number generation. And input and output were just INPUT and PRINT, or LIST for the source. Of course, that procedural style of programming was always simpler and more approachable. So there, you now have Kemeny as a direct connection between Einstein and the modern era of computing. Two immigrants who helped change the world. One famous, the other with a slightly more nuanced but probably no less important impact in a lot of ways. Those early BASIC programs opened our eyes. Games, spreadsheets, word processors, accounting, human resources, databases. Kemeny would go on to chair the commission investigating Three Mile Island, a partial nuclear meltdown that was a turning point for the nuclear power industry.
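If you want to picture what that little two-line program was actually doing, here’s a rough sketch of it in Java, a language we’ll cover in a later episode. Nobody wrote BASIC this way, obviously; it’s just an illustration of the same endless loop, with the class name made up for the example.

// A rough sketch of the two-line BASIC program from earlier in this episode:
//   10 PRINT "Charles was here"
//   20 GOTO 10
// GOTO 10 just jumps back to line 10 forever, so an infinite loop does the same thing.
public class CharlesWasHere {
    public static void main(String[] args) {
        while (true) {                                  // 20 GOTO 10
            System.out.println("Charles was here");    // 10 PRINT "Charles was here"
        }
    }
}

Run it and you get the same wall of “Charles was here” until you terminate it, which in a modern terminal just means pressing Ctrl+C instead of asking the teacher.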
I wonder what Kemeny thought when he read the following on the Statue of Liberty: Give me your tired, your poor, Your huddled masses yearning to breathe free, The wretched refuse of your teeming shore. Perhaps, like many before and after, he thought that he would breathe free and with that breath, do something great, helping bring the world into the nuclear era and preparing thousands of programmers to write software that would change the world. When you wake up in the morning, you have crusty bits in your eyes and things seem blurry at first. You have to struggle just a bit to get out of bed and see the sunrise. BASIC got us to that point. And for that, we owe them our sincerest thanks. And thank you dear listeners, for your contributions to the world in whatever way they may be. You’re beautiful. And of course thank you for giving me some meaning on this planet by tuning in. We’re so lucky to have you, have a great day!
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at Java. Java is an Indonesian island with over 141 million people. Java man lived there 1.7 million years ago. Wait, wrong Java. The infiltration of coffee into the modern world can really trace its roots to ancient coffee forests on the Ethiopian plateau. Sufis in Yemen began importing coffee in the 1400s to make a beverage that would aid in concentration and act as a kind of spiritual intoxication. Um, still the wrong Java… although caffeine certainly has a link somewhere, somehow. The history of the Java programming language dates back to early 1991. It all started at Sun Microsystems with the Stealth Project. Patrick Naughton had considered going to NeXT due to limitations in C++ and the C APIs. But he stayed to join Stealth, a secret team of engineers led by a developer Sun picked up from Carnegie Mellon named James Gosling. Stealth was formed to explore new opportunities in the consumer electronics market. This came up after Gosling had written a program to port software from the PERQ to the VAX, emulating hardware as many, many, many programmers had done before him. I wonder if he realized, when he went on to build the first Java compiler and the original virtual machine, that he would write a dozen books about Java and that it would consume most of his professional life. I wonder how much coffee he would have consumed if he had. They soon added Mike Sheridan to the team. The project was later known as the “Green” project and, with the advent of the web, somewhat pivoted into more of a web project. You see, Microsoft and the clones had some runaway success, but Apple and other vendors were still a factor in the home market, and Sun saw going down-market as the future of the company. They added a few more people and rented separate offices in Menlo Park. Lisa Friendly was the first employee in the Java Products Group. Gosling would be lead engineer. John Gage would direct the project. Jonni Kanerva would write The Java FAQ. The team started to build what was jokingly described as “C++ ++ --”: C++ with some things added and some things taken away. Sun co-founder Bill Joy wanted a language that combined the best parts of Mesa and C. In 1993, NCSA gave us Mosaic. That Andreessen guy was on the news saying the era of the desktop was over. These brilliant designers knew they needed an embedded application, one that could even be used in a web browser, or an applet. The language was initially called “Oak,” but was later renamed “Java” in 1995, supposedly from a list of random words but really due to massive consumption of coffee imported from the island of Java. By the way, it only aids in concentration up to a point. Then you get jumpy. Like a Halfling. It took the Java team 18 months to develop the first working version. It is unknown how much java they drank in this time. Between the initial implementation of Oak in the fall of 1992 and the public announcement of Java in the spring of 1995, around 13 people ended up contributing to the design and evolution of the language. They were going to build a language that could sit on top of the operating systems on the market. This would allow them to be platform agnostic. In 1995, the team announced that the evolution of Mosaic, Netscape Navigator, would provide support for Java. Java gave us Write Once, Run Anywhere platform independence. You could run the code on a Mac, on Solaris, or on Windows.
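To make “Write Once, Run Anywhere” a little more concrete, here’s a minimal sketch, with the class name made up for illustration. You compile it once with javac into bytecode, and the same .class file runs on any operating system that has a JVM.

// A minimal sketch of platform independence: compile once to bytecode with
//   javac Hello.java
// then run the same Hello.class anywhere there is a JVM with
//   java Hello
public class Hello {
    public static void main(String[] args) {
        // os.name is a standard system property; it differs per host,
        // but the compiled bytecode does not.
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}

On Solaris it might print “Hello from SunOS”, on Windows “Hello from Windows 10”, on a Mac “Hello from Mac OS X” - same bytes, different JVM underneath.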
Java derives its syntax from C, and many of its object-oriented features were influenced by C++. Several of Java’s defining characteristics come from, or are responses to, its predecessors. Java was meant to build on these and become a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic language. Before I forget: the “Mocha Java” blend pairs coffee from Yemen and Java to get a thick, syrupy, and highly caffeinated blend that is often found with a hint of cinnamon or clove. As with all other computer languages, the innovation in the design of the language was driven by the need to solve a fundamental problem that the preceding languages could not solve. To start, the creation of C is considered by many to have marked the beginning of the modern age of computer languages. It successfully synthesized the conflicting attributes that had so troubled earlier languages. The result was a powerful, efficient, structured language that was relatively easy to learn. It also included one other, nearly intangible aspect: it was a programmer’s language. Prior to the invention of C, computer languages were generally designed either as academic exercises or by bureaucratic committees. C was designed, implemented, and developed by real, working programmers, reflecting how they wanted to write code. Its features were honed, tested, thought about, and rethought by the people who actually used the language. C quickly attracted many followers who had a near-religious zeal for it, and it found wide and rapid acceptance in the programmer community. In short, C is a language designed by and for programmers, as is Java. Throughout the history of programming, the increasing complexity of programs has driven the need for better ways to manage that complexity. C++ is a response to that need in C. To better understand why managing program complexity is fundamental to the creation of C++, consider that in the early days of programming, computer programming was done by manually toggling in the binary machine instructions at the front panel or by punching cards. As long as programs were just a few hundred instructions long, this worked. But as programs grew, assembly language was invented so that a programmer could deal with larger, increasingly complex programs by using symbolic representations of the machine instructions. As programs continued to grow, high-level languages were introduced that gave the programmer more tools with which to handle complexity. This gave birth to the first popular programming language: FORTRAN. Though impressive, it had its shortcomings, as it didn’t encourage clear and easy-to-understand programs. In the 1960s structured programming was born. This is the method of programming championed by languages such as C. The use of structured languages enabled programmers to write, for the first time, moderately complex programs fairly easily. However, even with structured programming methods, once a project reaches a certain size, its complexity exceeds what a programmer can manage. Due to continued growth, projects were exceeding the limits of the structured approach. To overcome this problem, a new way to program had to be invented: object-oriented programming, or OOP. Object-oriented programming is a programming methodology that helps organize complex programs through the use of inheritance, encapsulation, and polymorphism.
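Since we just threw out the words inheritance, encapsulation, and polymorphism, here’s a tiny, hypothetical Java example of all three; the Shape, Circle, and Square names are just made up for illustration.

// Inheritance: Circle and Square extend Shape.
// Encapsulation: their state is private and only reachable through methods.
// Polymorphism: the same area() call does different things per object.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;                         // encapsulated state
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class OopDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(2.0), new Square(3.0) };
        for (Shape s : shapes) {
            // Polymorphism: the right area() is picked at runtime.
            System.out.println(s.getClass().getSimpleName() + " area = " + s.area());
        }
    }
}

The loop only knows it has a Shape; which area() actually runs gets decided per object, and that’s the part that makes large programs easier to organize.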
In spite of the fact that C is one of the world’s great programming languages, there is still a limit to its ability to handle complexity. Once the size of a program exceeds a certain point, it becomes so complex that it is difficult to grasp as a totality. While the precise size at which this occurs differs, depending upon both the nature of the program and the programmer, there is always a threshold at which a program becomes unmanageable. C++ added features that enabled this threshold to be broken, allowing programmers to comprehend and manage larger programs. The primary motivation for creating Java was the need for a platform-independent, architecture-neutral language that could be used to write software to be embedded in various consumer electronic devices, such as microwave ovens and remote controls. The developers wanted to avoid having to compile code separately for every different chip the way C and C++ required, looking for something easier and more cost efficient. But embedded systems took a backseat when the Web took shape at about the same time that Java was being designed. Java was suddenly propelled to the forefront of computer language design. This could be in the form of applets for the web, or runtime-only packages known as Java Runtime Environments, or JREs. At the time, developers had fractured into three competing camps: Intel, Macintosh, and UNIX. Most software engineers stayed inside their fortified boundaries. But with the advent of the Internet and the Web, the problem of the portability of software between platforms suddenly got important in ways it hadn’t been since the forming of ARPANET. Even though many platforms are attached to the Internet, users would like them all to be able to run the same program. What was once an irritating but low-priority problem had become a high-profile necessity. The team realized this pressing need and made the switch to refocus Java from embedded consumer electronics to Internet programming. So while the desire for an architecture-neutral programming language provided the initial spark, the Internet ultimately led to Java’s large-scale success. If Java derives much of its character from C and C++, this is by intent. The original designers knew that using familiar syntax would make their new language appealing to legions of experienced C/C++ programmers. Java also shares some of the other attributes that helped make C and C++ successful. Java was designed, tested, and refined by real, working programmers. Not scientists. Java is a programmer’s language. Java is also cohesive and logically consistent. If you program well, your programs reflect it. If you program poorly, your programs reflect that, too. Put differently, Java is not a language with training wheels. It is a language for professional programmers. Java 1 would be released in 1996 for Solaris, Windows, Mac, and Linux. It was released as the Java Development Kit, or JDK, and to this day we still refer to the version we’re using that way, as in JDK 11. Version 2, or 1.2, came in 1998, and with the rising popularity we got a few things that the burgeoning community needed. These included event listeners, Just In Time compilers, and changes to thread synchronization. 1.3, code named Kestrel, came in 2000, bringing RMI changes for CORBA compatibility, synthetic proxy classes, the Java Platform Debugger Architecture, the Java Naming and Directory Interface in the core libraries, the HotSpot JVM, and Java Sound.
Merlin, or 1.4, came in 2002, bringing the frustrating regular expressions, native XML processing, logging, Non-Blocking I/O, and SSL. Tiger, or 1.5, came in 2004. This was important. We could autobox, get compile-time type safety with generics, statically import the static members of a class, and use annotations for declarative programming, and runtime libraries were mapped into memory - a huge improvement to how JVMs work. Java 5 also gave us the version number change, so JDK 1.5 was officially recognized as Java 5. JDK 1.6, or Mustang, came in 2006. This was a big update, bringing monitoring and management tools; compiler access gave us programmatic access to javac, and pluggable annotations allowed us to analyze code semantically as a step before javac compiles the code. WebStart got a makeover and SE 6 unified plugins with WebStart. Enhanced XML services would be important (at least until the advent of JSON), and you could mix JavaScript up with Java. We also got JDBC 4, Character Large Objects, SwingWorker, JTable, better SQL datatypes, native PKI, Kerberos, LDAP, and honestly the most important thing was that it was stable. Although I’ve never written code stable enough to encounter their stability issues… not enough coffee I suppose. Sun purchased Oracle in 2009. Wait, no, that’s one of my Marvel What If comic book fantasies where the world was a better place. Oracle bought Sun in 2009. After ponying up $5.6 billion, Oracle had a lot of tech based on Sun products, and seeing Sun as an increasingly attractive acquisition target for other companies, Oracle couldn’t risk someone else swooping in and buying Sun. With all the turmoil created, it took 5 years during a pretty formative time on the web, but we finally got Dolphin, or 1.7, which came in 2011 and gave us compressed 64-bit pointers, strings in switch statements, binary integer literals and underscores in numeric literals, better graphics APIs, more cryptography algorithms, and a new I/O library that gave even better platform compatibility. Spider, or 1.8, came along in 2014. We got the ability to launch JavaFX application JARs, statically-linked JNI libraries, a new date and time API, annotations on Java types, unsigned integer arithmetic, and a JavaScript runtime that allowed us to embed JavaScript code in apps - whether that was a good idea or not is still TBD. Lambda expressions had been dropped from the Java 7 plan, so here we also finally got lambdas. And this kickstarted a pretty interesting time in the development of Java. We got 9 in 2017, 10 and 11 in 2018, and 12 in 2019, with 13 and 14 coming right behind. Of these, only 8 and 11 are LTS, or commercial Long Term Support releases, basically meaning we got the next major LTS after 8 in 2018 and, according to my trend line, should expect the next LTS in 2021 or 2022. JDK 13, when released later in 2019, will give us text blocks, switch expressions, improved memory management by returning unused heap memory to the OS, improved application class-data sharing, and a reimplementation of the legacy socket API. But it won’t likely be an LTS release. Today there are over 45 billion active Java Virtual Machines and Java remains arguably the top language for microservices, CI/CD environments, and a number of other use cases. Other languages have come. Other languages have gone. Many are better in their own right. Some are not. Java is not perfect. It was meant to reduce complexity. But as languages evolve they become more complex.
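To put a couple of those release notes into actual code, here’s a short, hypothetical sketch showing generics and autoboxing from Java 5 next to a lambda expression and a stream from Java 8; the class and variable names are made up for the example.

import java.util.ArrayList;
import java.util.List;

public class VersionFeatures {
    public static void main(String[] args) {
        // Java 5: List<Integer> gives compile-time type safety,
        // and autoboxing quietly wraps each int as an Integer.
        List<Integer> numbers = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            numbers.add(i);                       // autoboxing: int -> Integer
        }

        // Java 8: a lambda expression passed into a stream pipeline.
        int sumOfSquares = numbers.stream()
                                  .mapToInt(n -> n * n)
                                  .sum();
        System.out.println("Sum of squares: " + sumOfSquares);
    }
}

Before Java 5 that list would have been a bag of raw Objects with manual casts, and before Java 8 that lambda would have been an anonymous inner class.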
A project with a million lines of code is monolithic and probably incorporates plugins or frameworks, like Spring Security as an example, that make the code even more complex. But Java is meant to reduce cyclomatic complexity, to allow for a language that is simple enough for a professional to pick up quickly and only be as complex as the quality of the code being compiled. I don’t personally love Java. I respect it. And I adore high-quality programmers and their code in any language. But I’ve had to redo so much work because other languages have come and gone over the years that if I were starting a new big monolithic web app today, I’d probably use Java every time. Which isn’t to say that Java isn’t useful in microservice architectures. Depending on what’s required from the contract testing on a service, I might use Java, Go, Node, Python, or even the formerly hipster Ruby. Although I don’t love drinking PBR… If I’m writing an Android app, I need to know Java. No matter what the lawyers say. If I’m planning an enterprise web app, Java needs to be in the conversation. But usually, I can do the work in a fraction of the time using something like Python. But most big companies speak Java. And for good reason. Because of the write once, run anywhere approach and the level of permissions a JRE needs, there have been security challenges with running Java on desktop computers. Apple deprecated Java on the Mac in 2010. Users could still install it themselves, but Java has since settled mostly into server-side and enterprise applications and is the gold standard for those. I’m certainly not advocating going back to the 90s and running Java apps on our desktops any more. No matter what you think of Java, one thing you have to admit: the introduction of the language and its evolution have had a substantial impact on the IT industry, and they will continue to do so. A great takeaway here might be that there’s always a potential alternative that might be better suited for a given task. But when it comes to choosing a platform that will be there in a decade or 3, getting support, getting a team that can scale, sometimes you might end up using a solution that doesn’t immediately seem as well suited to a need. But it can get the job done. As it’s been doing since James Gosling and the rest of the team started the project back in the early 90s. So thank you listeners, for sticking with us through this episode of the History of Computing Podcast. We’re lucky to have you.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover the first real object-oriented programming language, Smalltalk. Many people outside of the IT industry would probably know the terms Java, Ruby, or Swift. But I don’t think I’ve encountered anyone outside of IT that has heard of Smalltalk in a long time. And yet… Smalltalk influenced most languages in use today and even a lot of the base technologies people would readily identify with. As with PASCAL from Episode 3 of the podcast, Smalltalk was designed and created in part for educational use, but more so for constructionist learning for kids. Smalltalk was first designed at the Learning Research Group (LRG) of Xerox PARC by Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, Scott Wallace, and others during the 1970s. The term object-oriented programming had been coined by Alan Kay in the late 60s. Kay took the lead on a project which developed an early mobile device called the Dynabook at Xerox PARC, as well as the Smalltalk object-oriented programming language. The first release was called Smalltalk-72 and was really the first real implementation of this weird new programming philosophy Kay called object-oriented programming. Although… Smalltalk was inspired by Simula 67, from Norwegian developers Kristen Nygaard and Ole-Johan Dahl. Even before that, Stewart Nelson and others from MIT had been using a somewhat object-oriented model when working on Lisp and other programs. Kay had heard of Simula and how it handled passing messages, and he wrote the initial Smalltalk in a few mornings. He’d go on to work with Dan Ingalls to help with implementation and Adele Goldberg to write documentation. This was Smalltalk 71. Object-oriented programming is a programming model where programs are organized around data, also called objects. This is a contrast to programs being structured around functions and logic. Those objects have data fields, attributes, behaviors, etc. For example, a product you’re selling can have a SKU, a price, dimensions, quantities, etc. This means you figure out what objects need to be manipulated and how those objects interact with one another. Objects are generalized as a class of objects. These classes define the kind of data and the logic used when manipulating data. Within those classes, there are methods, which define the logic and interfaces for object communication, known as messages. As programs grow and people collaborate on them together, an object-oriented approach allows projects to more easily be divided up among various team members working on different parts. Parts of the code are more reusable. The way programs are laid out is more efficient. And in turn, the code is more scalable. Object-oriented programming is based on a few basic principles. These days those are interpreted as encapsulation, abstraction, inheritance, and polymorphism. Although to Kay, encapsulation and messaging are the most important aspects, and all the classing and subclassing isn’t nearly as necessary. Most modern languages that matter are based on these same philosophies, such as Java, JavaScript, Python, C++, .NET, Ruby, Go, Swift, etc. Although Go is arguably not really object-oriented because there’s no type hierarchy and there are some other differences - but when I look at the code, it looks object-oriented!
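To make that product example a bit more concrete, here’s a hypothetical sketch in Java terms, since Java is the language we’ve been leaning on in these episodes; in Smalltalk you’d express the same thing by sending messages to an object, and here a method call plays that role. The Product class and its fields are made up for illustration.

// A hypothetical Product object: data (sku, price, quantity) bundled
// together with the behavior that manipulates that data.
public class Product {
    private final String sku;       // the data the object carries
    private double price;
    private int quantity;

    public Product(String sku, double price, int quantity) {
        this.sku = sku;
        this.price = price;
        this.quantity = quantity;
    }

    // The "messages" this object understands.
    public double inventoryValue() { return price * quantity; }
    public void sell(int count)    { quantity -= count; }

    public static void main(String[] args) {
        Product p = new Product("SKU-1234", 9.99, 100);
        p.sell(10);                                          // send the object a message
        System.out.println(p.sku + " value: " + p.inventoryValue());
    }
}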
So there was this new programming paradigm emerging and Alan Kay really let it shine in Smalltalk. At the time, Xerox PARC was in the midst of revolutionizing technology. The MIT hacker ethic had seeped out to the west coast with John McCarthy’s AI lab at Stanford, SAIL, and got all mixed into the fabric of the chip makers in the area, such as Fairchild. That Stanford connection is important. The Augmentation Research Center is where Engelbart introduced the NLS system and invented the mouse. And that work resulted in advances like hypertext links. In the 60s. Many of those Stanford Research Institute people left for Xerox PARC. Ivan Sutherland’s work on Sketchpad was known to the group, as was the mouse from NLS, and because the computing community that was into research was still somewhat small, most were also aware of the graphic input language, or GRAIL, that had come out of RAND. Sketchpad had handled each drawing element as an object, making it a predecessor to object-oriented programming. GRAIL ran on the RAND Tablet and could recognize letters, boxes, and lines as objects. Smalltalk was meant to show a dynamic book. Kinda’ like the epub format that iBooks uses today. The use of objects similar to those used in Sketchpad and GRAIL just made sense. One evolution led to another and another, from Lisp and the batch methods that came before it through to modern models. But the Smalltalk stop on that model railroad was important. Kay and the team gave us some critical ideas. Things like overlapping windows. These were made possible by the inheritance model of execution, a standard class library, and a code browser and editor. This was one of the first development environments that looked like a modern version of something we might use today, like IntelliJ or Eclipse for Java developers. Smalltalk was also the first implementation of the Model-View-Controller pattern, in 1979, a pattern that is now standard for designing graphical software interfaces. MVC divides program logic into the Model, the View, and the Controller in order to separate how data is represented internally from how it is presented to the user. Decoupling the model from the view and the controller allows for much better reuse of libraries of code, as well as much more collaborative development. There’s a little sketch of that split at the end of this segment. Another important thing happened at Xerox in 1979, as they were preparing to give Smalltalk to the masses. There are a number of different interpretations to stories about Steve Jobs and Xerox PARC. But in 1979, Jobs was looking at how Apple would evolve. Andy Hertzfeld and the original Mac team were mostly there at Apple already, but Jobs wanted fresh ideas and traded a million bucks in Apple stock to Xerox for a tour of PARC. The Lisa team came with him and got to see the Alto. The Alto prototype was part of the inspiration for a GUI-based Lisa and Mac, which of course inspired Windows and many advances since. Smalltalk was finally released to other vendors and institutions in 1980, including DEC, HP, Apple, and Berkeley. From there a lot of variants have shown up. Tektronix shipped one of the first commercial versions in 1984, and Instantiations would go on to partner with IBM. A few companies tried to take Smalltalk to the masses, and by the late 80s vendors were starting to add SQL connectivity. The Smalltalk companies often had “object” or “visual” in their names. This is a great leading indicator of what Smalltalk is all about. It’s visual and it’s object oriented.
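And here’s that sketch: a minimal, hypothetical Model-View-Controller split in Java, just to make the separation concrete. The counter example and all the class names are made up for illustration.

// Model: holds state and knows nothing about how it gets displayed.
class CounterModel {
    private int count = 0;
    void increment() { count++; }
    int value()      { return count; }
}

// View: rendering only, no state of its own.
class CounterView {
    void render(int value) { System.out.println("Count is now: " + value); }
}

// Controller: turns "user input" into model changes, then refreshes the view.
class CounterController {
    private final CounterModel model;
    private final CounterView view;
    CounterController(CounterModel m, CounterView v) { model = m; view = v; }
    void userClickedIncrement() {
        model.increment();
        view.render(model.value());
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterController controller = new CounterController(new CounterModel(), new CounterView());
        controller.userClickedIncrement();   // pretend the user clicked a button twice
        controller.userClickedIncrement();
    }
}

The model knows nothing about printing, the view knows nothing about state, and the controller is the only piece that knows about both, which is exactly the decoupling that makes the parts reusable.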
Those companies slowly merged into one another and went out of business through the 90s. Instantiations was acquired by Digitalk. ParcPlace owed its name to where the language was created. The biggest survivor was ObjectShare, which was traded on NASDAQ, peaking at $24 a share, until 1999, when an LA Times article reported: “ObjectShare Inc. said its stock has been delisted from the Nasdaq national market for failing to meet listing requirements. In a press release Thursday, the company said it is appealing the decision.” And while the language is still maintained by companies like Instantiations, in the heyday there was even a version from IBM called IBM VisualAge Smalltalk. And of course there were combo-language abominations, like a Smalltalk Java add-on. Just trying to breathe some life in. This was the era when FileMaker, FoxPro, and Microsoft Access were giving developers the ability to quickly build graphical tools for managing data that were the next generation past what Smalltalk provided. And on the larger side, products like JDS, Oracle, and PeopleSoft really jumped to prominence. And on the education side, the industry segmented into learning management systems and various application vendors. Until iOS and Google’s Android came along, when apps for those platforms became all the rage. Smalltalk does live on in other forms though. As with many dying technologies, an open source version of Smalltalk came along in 1996. Squeak was written by Alan Kay, Dan Ingalls, Ted Kaehler, Scott Wallace, John Maloney, Andreas Raab, and Mike Rueger, and continues today. I’ve tinkered with Squeak here and there, and I have to say that my favorite part is just getting to see the work of people who actually, truly care about teaching languages to kids. And how some have been doing that for 40 years. A great quote from Alan Kay, discussing a parallel between Vannevar Bush’s “As We May Think” and the advances they made to build the Dynabook: “If somebody just sat down and implemented what Bush had wanted in 1945, and didn’t try and add any extra features, we would like it today. I think the same thing is true about what we wanted for the Dynabook.” There’s a direct path from some of the developers of Smalltalk to deploying MacBooks and Chromebooks in classrooms. And the influence these more mass-marketed devices have will be felt for generations to come. Even as we devolve to new models from object-oriented programming, and new languages. The research that went into these early advances, and the continued adoption and research, have created a new world of teaching. At first we just wanted to teach logic and fundamental building blocks. Now kids are writing code. This might be writing Java programs in robotics classes, HTML in Google Classroom, or beginning iOS apps in Swift Playgrounds. So until the next episode, think about this: Vannevar Bush pushed for computers to help us think, and we have all of the world’s data at our fingertips. With all of the people coming out of school who know how to write code today, with the accelerometers, with the robotics skills, what is the next stage of synthesizing all human knowledge and truly making computers help us think, as we may think? So thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!
Visual Basic
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to cover an important but often under-appreciated step on the path to ubiquitous computing: Visual Basic. Visual Basic is a programming language for Windows. It’s in most every realistic top 10 of programming languages of all time. It’s certainly split into various functional areas over the last decade or so, but it was how you did a lot of different tasks in Windows automation and programming for two of the most important decades, through a foundational period of the PC movement. But where did it come from? Let’s go back to 1975. This was a great year. The Vietnam War ended, Sony gave us Betamax, JVC gave us VHS. Francisco Franco died. I don’t wish ill on many, but if I could go back in time and wish ill on him, I would. NASA launched a joint mission with the Soviet Union. The UK voted to stay in the European Economic Community. Jimmy Hoffa disappeared. And the Altair shipped. Altair BASIC is like that Lego starter set you buy your kid when you think they’re finally old enough to not swallow the smallest pieces. From there, you buy them more and more, until you end up stepping on those smallest pieces and cursing. Much as I used to find myself frequently cursing at Visual Basic. And such is life. Or at least, such is giving life to your software ideas. No matter the language, there’s often plenty of cursing. So let’s call the Altair a proto-PC. It was underpowered, cheap, and with this Microsoft BASIC programming language you could, OMG, feed it programs that would blink lights or create early games. That was 1975. And it was based largely on the work of John Kemeny and Thomas Kurtz, the authors of the original BASIC in 1964 at Dartmouth College. As the PC revolution came, BASIC was popular on the Apple II and the original PCs, with QuickBASIC coming in 1985 and an IDE, or Integrated Development Environment, for QuickBASIC shipping in 2.0. At the time Maestro was the biggest IDE in use, and IDEs had been around since the first of them shipped in the mid-70s. Next, in 3.0, you could compile these programs into DOS executables, or .exe files, and 4.0 brought debugging in the IDE. Pretty sweet. You could run the interpreter without ever leaving the IDE! No offense to anyone, but Apple was running around the world pitching vendors to build software for the Mac, yet had created an almost contentious development environment. And it showed in the number of programs available for the Mac. Microsoft was obviously investing heavily in enabling developers to develop in a number of languages, and it showed; Microsoft had 4 times the software titles. Many of which were in BASIC. But the last version of QuickBASIC, as it was known by then, came in 4.5, in 1988, the year the Red Army withdrew from Afghanistan - probably while watching Who Framed Roger Rabbit on pirated VHS tapes. But by the late 80s, use began to plummet. Much as my daughter’s joy in her Legos began to plummet when she entered tweenhood. It had been a huge growth spurt for BASIC, but the era of object-oriented programming was emerging. Microsoft, though, was in an era of hyper-growth, with Windows 3.0 on the way, and what’s crazy is they were just entering the buying tornado. In 1988, the same year as the final release of QuickBASIC, Alan Cooper created a visual programming language he’d been calling Ruby. Now, there would be another Ruby later.
Cooper’s language was visual, and Apple had been early to the market on visual interfaces with the Mac, introduced in 1984. Microsoft had responded with Windows 1.0 in 1985. But the development environment just wasn’t very… visual. Most people at the time used Windows to open a window of icky text. Microsoft leadership knew they needed something new; they just couldn’t get it done. So they started looking for a more modern option. Cooper showed his Ruby environment to Bill Gates and Gates fell in love. Gates immediately bought the product and it was renamed to Visual Basic. Sometimes you build, sometimes you partner, and sometimes you buy. And so in 1991, Visual Basic was released at Comdex in Atlanta, Georgia, and came around for DOS the next year. I can still remember writing a program for DOS. They faked a GUI using ASCII art. Gross. VB 2 came along in 1992, laying the foundations for class modules. VB 3 came in 93 and brought us the JET database engine. Not only could you instantiate an object, but you had somewhere to keep it. VB 4 came in 95, when we got a 32-bit option. That adds a year or 6 for every vendor. The innovations that Visual Basic brought to Windows can still be seen today. VBX and DLL are two of the most substantial. A DLL is a “dynamic link library” file that holds code and procedures that Windows programs can then consume. DLLs allow multiple programs to use that code, saving on memory and disk space. Shared libraries are the cornerstone of many an object-oriented language. VBXs aren’t really used any more, as they’ve been replaced with OCXs, but they’re similar and the VBX certainly spawned the innovation. These Visual Basic Extensions, or VBX for short, were C or C++ components that were assembled into an application. When you look at applications you can still see DLLs and OCXs. VB 4 was when we switched from VBX to OCX. VB 5 came in 97. This was probably the most prolific, both for software you wanted on your computer and malware. We got those crazy ActiveX controls in VB 5. VB 6 came along in 1998, extending the ability to create web apps. And we sat there for 10 years. Why? The languages really started to split with the explosion of web tools. VBScript was put into Active Server Pages. We got the .NET framework for compiled web pages. We got Visual Basic for Applications, allowing Office to run VB scripts using VBA 7. Over the years the code evolved into what are now known as Universal Windows Platform apps, written in C++ with WinRT or C++ with CX. Those shared libraries are now surfaced in common APIs and sandboxed, given that security and privacy have become a much more substantial concern since the tidal wave of the Internet crashed into our Lego sets, smashing them back to single blocks. Yah, those blocks hurt when you step on them. So you look for ways not to step on them. And controlling access to API endpoints with entitlements is a pretty good way to walk lightly. Bill Gates awarded Cooper the first “Windows Pioneer Award” for his work on Visual Basic. Cooper continued to consult with companies, with this crazy idea of putting users first. He was an early proponent of user experience and of putting users first when building interfaces. In fact, his first book was called “About Face: The Essentials of User Interface Design.” That was published in 1995. He still consults and trains on UX. Honestly, Alan Cooper only needs one line on his resume: “The Father of Visual Basic.” Today Eclipse and Visual Studio are the most used IDEs in the world.
And there’s a rich ecosystem of specialized IDEs. The IDE gives code completion, smart code completion, code search, cross-platform compiling, debugging, multiple language support, syntax highlighting, version control, visual programming, and so much more. Much of this isn’t available on every platform or for every IDE, but those are the main features I look for - like the first time I cracked open IntelliJ. The IDE is almost optional in simple procedural programming, but in an era of increasingly complex object-oriented programming, where classes are defined in hundreds or thousands of itty bitty files, a good, smart, feature-rich IDE is a must. And Visual Studio is one of the best you can use. Given that old-school procedural BASIC is effectively dead, there’s no BASIC remaining in any of the languages you build modern software in. The explosion of object-orientation created flaws in operating systems, but we’ve matured beyond that and now get to find all the new flaws. Fun, right? But it’s important to think about this: from Alan Kay’s introduction of Smalltalk in 1972, new concepts in programming had been emerging and evolving. The latest incarnation is the API-driven programming methodology. Gone are the days when we accessed memory directly. Gone are the days when the barrier to learning to program was understanding procedural, top-to-bottom syntax. Gone are the days when those Legos were simple little sets. We’ve moved on to building Death Stars out of Legos with more than 3500 pieces. Due to increasingly complex apps we’ve had to find new techniques to keep all those pieces together. And as we did, we learned that we needed to be much more careful. We’ve learned to write code that is easily tested. And we’ve learned to write code that protects people. Visual Basic was yet another stop on the evolution toward modern design principles. We’ve covered others and we’ll cover more in coming episodes. So until next time, think of the continuing evolution and what might be next. You don’t have to be in front of it, but it does help to have a nice big think on how it can impact projects you’re working on today. So thank you for tuning in to yet another episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at Pong. In the beginning there was Pong. And it was glorious! Just think of the bell bottoms at Andy Capp’s Tavern in Sunnyvale, California on November 29th, 1972. The first Pong they built was just a $75 black and white TV from a Walgreens and some cheap parts. The cabinet wasn’t even that fancy. And after that night, the gaming industry was born. It started with people showing up just to play the game. They ended up waiting for the joint to open, not drinking, and just gaming the whole time. The bartender had never seen anything like it. I mean, just a dot being knocked around a screen. But it was social. You had to have two players. There was no machine learning to play the other side yet. Pretty much the same thing as real ping pong. And so Pong was released by Atari in 1972. It reminded me of air hockey the first time I saw it. You bounced a ball off a wall and tried to get it past the opponent using paddles. It never gets old. Ever. That’s probably why, of all the Atari games at the arcade, more quarters got put into it than any other. The machines were sold for three times the cost to produce them; unheard of at the time. The game got so popular that within a year the company had sold 2,500 machines, a number they tripled in 1974. I wasn’t born yet. But I remember my dad telling me that they didn’t even have a color TV yet in 72. They’d manufactured the games in an old skate rink. And they were cheap because, with the game needing so few resources, they pulled it off without a CPU. But what about the code? It was written by Al Alcorn as a training exercise that Nolan Bushnell gave him after he was hired at Atari. He was a pretty good hire. It was supposed to be so easy a kid could play it. I mean, it was so easy a kid could play it. Bushnell would go down as the co-creator of Pong. Although maybe Ralph Baer should have as well, given that Bushnell had seen Baer’s table tennis game demoed at a trade show the same year he had Alcorn write Pong. Baer had gotten the idea of building video games while working on military systems at a few different electronics companies in the 50s, and he even patented a device called the Brown Box, filing in 1971 with the patent granted in 1973, and licensed it to Magnavox to become the Odyssey. Tennis for Two had been made available in 1958. Spacewar! had popped up in 1962, thanks to MIT’s Steve “Slug” Russell being teased until he finished it. It was written on the PDP-1, the successor to the TX-0, and slowly made its way across the world as the PDP was shipping. Alan Kotok had whipped up some sweet controllers, but it could be played with just the keyboard as well. No revolution seemed in sight yet, as it was really just shipping to academic institutions. And to very large companies. The video game revolution was itching to get out. People were obsessed with space at the time. Space was all over science fiction, there was a space race being won by the United States, and so Spacewar! gave way to Computer Space, the first arcade video game to ship, in 1971, modeled after Spacewar!. But as an early coin-operated video game it was a bit too complicated. As was Galaxy Game, another coin-operated take on Spacewar! that showed up at Stanford that same year. Computer Space was whipped up by Bushnell and cofounder Ted Dabney, who’d worked together at Ampex.
They initially called their company Syzygy Engineering but, as can happen, there was a conflict with that trademark and they changed the name to Atari. Bushnell and Dabney had designed Computer Space, but it was built and distributed by Nutting Associates. It was complex and needed a fair amount of instructions to get used to it. Pong on the other hand needed no instructions. A dot bounced from you to a friend and you tried to get it past the other player. Air hockey. Ping pong. Ice hockey. Football. It just kinda’ made sense. You bounced the dot off a paddle. The center of each paddle returned your dot straight back at a 90 degree angle, and the further out toward the edge you hit it, the sharper the return angle. The ball also got faster the longer the game went on. I mean, you wanna’ make more quarters, right?!?! Actually that was a bug, but one you like. There’s a little sketch of that bounce math at the end of this segment. They added sound effects. They spent three months on it. It was glorious, and while Al Alcorn has done plenty of great stuff in his time in the industry, I doubt many stretches have been filled with the raw creativity he got to display during those months. It was a runaway success. There were clones of Pong. Coleco released the Telstar and Nintendo came out with the Color TV-Game 6. In fact, General Instrument just straight up cloned the game onto a chip. Something else happened in 1972. The Magnavox Odyssey shipped and was the first home console with interchangeable game cards. After Pong, Atari had pumped out Gotcha, Rebound, and Space Race. They were finding success in the market. Then Sears called. They wanted to sell Pong in the home. Atari agreed. They actually outsold the Odyssey when they finally made the single-game home console. Magnavox sued, claiming the concept had been stolen. Atari settled for $700k. Why settle? Well, Magnavox could actually prove that they’d built the game first and make a connection to where Atari got the idea. The good, the bad, and the ugly of intellectual property is that the laws exist for a reason. Baer beat Atari to the punch, and he’d go on to co-develop Simon. All of his prototypes now live at the Smithsonian. But back to Pong. The home version of Pong was built in 1974 and started showing up in homes in 1975, especially after that year’s Christmas buying season. It was a hit, well on its way to becoming iconic. Two years later, Atari released the iconic Atari 2600, which had initially been called the VCS. This thing was $200 and came with a couple of joysticks, a couple of paddles, and a game called Combat. Suddenly games were showing up in homes all over the world. They needed more money to make money, and Bushnell sold the company. Apple would become one of the fastest growing companies in US history with their release of the Apple II, making Steve Jobs a quarter of a billion dollars in 1970s money. But Atari ended up selling a ton of units and becoming THE fastest growing company in US history at the time. There were sequels to Pong, but by the time Breakout and other games came along, you really didn’t need them. I mean, Pin-Pong? Pong Doubles was fine, but Super Pong, Ultra Pong, and Quadrapong never should have happened. That’s cool though. Other games definitely needed to happen. Pac-Man became popular and, given it wasn’t just a dot but a dot with a slice taken out for a mouth, it ended up on the cover of Time in 1982. A lot of other companies were trying to build stuff, but Atari seemed to rule the world. These things have a pretty limited life-span. The video game crash of 1983 caused Atari to lose half a billion dollars. The stock price fell.
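And here’s that bounce math sketch: a small, hypothetical take in Java on the paddle rule described above, where the return angle depends on where the ball hits the paddle and the ball speeds up with every hit. The real machine did all of this in discrete TTL hardware with the paddle split into segments; there was no code at all, so treat this purely as an illustration, with the names and numbers made up.

// A toy version of the Pong paddle rule: a center hit goes straight back,
// an edge hit comes off at a sharp angle, and every rally speeds the ball up.
public class PongBounce {
    static final double MAX_ANGLE = Math.toRadians(60);   // sharpest return, at the paddle edge
    static double speed = 1.0;

    // hitY is where the ball struck; the paddle is described by its center and half-height.
    // Returns the new vertical velocity; the horizontal direction simply flips on a hit.
    static double bounce(double hitY, double paddleCenterY, double paddleHalfHeight) {
        double offset = (hitY - paddleCenterY) / paddleHalfHeight;   // -1.0 at one edge .. +1.0 at the other
        speed *= 1.05;                                               // every hit makes the ball a bit faster
        return Math.sin(offset * MAX_ANGLE) * speed;
    }

    public static void main(String[] args) {
        System.out.println("Center hit, vy = " + bounce(50, 50, 10));  // straight back
        System.out.println("Edge hit,   vy = " + bounce(60, 50, 10));  // sharp angle, and faster
    }
}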
At an important time in computers and gaming, they took too long to release the next model, the 5200. It was a disaster. Then the Nintendo arrived in some parts of the world in 1983 and took the US by storm in 1985. Atari went into a long decline, an almost unstoppable downward spiral in a way. That was sad to watch. I’m sure it was sadder to be a part of. It was even sadder when I studied corporate mergers in college. I’m sure that was even sadder to be a part of as well. Nolan Bushnell and Ted Dabney, the founders of Atari, wanted a hit coin-operated game. They got it. But they got way more than they bargained for. They were able to parlay Pong into a short-lived empire. Here’s the thing. Pong wasn’t the best game ever made. It wasn’t an original Bushnell idea. It wasn’t even IP they could keep anyone else from cloning. But it was the first successful video game and helped fund the development of the VCS, or 2600, that would bring home video game consoles into the mainstream, including my house. And the video game industry would later eclipse the movie industry. But the most important thing Pong did was to show regular humans that microchips were for more than… computing. Ironically, the game didn’t even need real microchips. The developers would all go on to do fun things. Bushnell founded Chuck E. Cheese with some of his Croesus-mode cash. Once it was clear that the Atari consoles were done, you could get iterations of Pong for the Sega Genesis, the PlayStation, and even the Nintendo DS. It’s floated around the computer world in various forms for a long, long time. The game is simple. The game is loved. Every time I see it I can’t help but think about bell bottoms. It launched industries. And we’re lucky to have had it. Just like I’m lucky to have had you as a listener today. Thank you so much for choosing to spend some time with us. We’re so lucky to have you.