'languages' Episodes

BASIC

     11/24/2019

BASIC Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at the history of the BASIC programming language. We say BASIC but really BASIC is more than just a programming language. It’s a family of languages and stands for Beginner’s All-purpose Symbolic Instruction Code. As the name implies, it was written to help students who weren’t math nerds learn how to use computers. When I was selling a house one time, someone who’d been to an open house was roaming around in my back yard, and after seeing a dozen books I’d written on my bookshelf they asked if I was a computer scientist. I really didn’t know how to answer that question.

We’ll start this story with Hungarian John George Kemeny. This guy was pretty smart. He was born in Budapest and moved to the US with his family in 1940 when his family fled anti-Jewish sentiment and laws in Hungary. Some of his family would go on to die in the Holocaust, including his grandfather. But safely nestled in New York City, he would graduate high school at the top of his class and go on to Princeton. Check this out: he took a year off to head out to Los Alamos and work on the Manhattan Project under Nobel laureate Richard Feynman. That’s where he met fellow Hungarian immigrant John von Neumann - two of a group George Marx wrote about in his book on great Hungarian emigrant scientists and thinkers, called The Martians. When he got back to Princeton he would get his doctorate and act as an assistant to Albert Einstein. Seriously, THE Einstein. Within a few years he was a full professor at Dartmouth and would go on to publish great works in mathematics. But we’re not here to talk about those contributions to making the world an all-around awesome place. You see, by the 60s math was evolving to the point that you needed computers. And Kemeny and Thomas Kurtz would do something special. Now Kurtz was another Dartmouth professor who got his PhD from Princeton. He and Kemeny got thick as thieves and wrote the Dartmouth Time-Sharing System (keep in mind that time sharing was all the rage in the 60s, as it gave more and more budding computer scientists access to those computer-things that, prior to the advent of Unix and the PC revolution, had mostly been reserved for the high priests of places like IBM).

So time sharing was cool, but the two of them would go on to do something far more important. In 1956, they would write DARSIMCO, or Dartmouth Simplified Code. As with Pascal, you can blame Algol. Wait, no one has ever heard of DARSIMCO? Oh… I guess they wrote that other language you’re here to hear the story of as well. So in 59 they got a half million dollar grant from the Alfred P. Sloan foundation to build a new department building. That’s when Kurtz actually joined the department full time. Computers were just going from big batch-processed behemoths to interactive systems. They tried teaching with DARSIMCO, FORTRAN, and the Dartmouth Oversimplified Programming Experiment, a classic 1960s-era acronym: DOPE. But they didn’t love the command structure, nor the fact that the languages didn’t produce feedback immediately. What was it called? Oh, so in 1964, Kemeny wrote the first iteration of the BASIC programming language and Kurtz joined him very shortly thereafter. They did it to teach students how to use computers. It’s that simple.
And as most software was free at the time, they released it to the public. We might think of this as open source-ish by today’s standards. I say ish as Dartmouth actually chose to copyright BASIC. Kurtz has said that the name BASIC was chosen because “We wanted a word that was simple but not simple-minded, and BASIC was that one.” The first program I wrote was in BASIC. BASIC used line numbers and read kinda like the English language. The first line of my program said: 10 PRINT “Charles was here” And the computer responded that “Charles was here” - the second program I wrote just added a second line that said: 20 GOTO 10 Suddenly “Charles was here” took up the whole screen and I had to ask the teacher how to terminate the signal. She rolled her eyes and handed me a book. And that, my friend, was the end of me for months. That was on an Apple IIc.

But a lot happened with BASIC between 1964 and then. As with many technologies, it took some time to float around and evolve. The syntax was kinda like a simplified FORTRAN, making my FORTRAN classes in college a breeze. That initial distribution evolved into Dartmouth BASIC, and they received a $300k grant and used student slave labor to write the initial BASIC compiler. Mary Kenneth Keller was one of those students and went on to finish her doctorate in 65 along with Irving Tang, the two becoming the first PhDs in computer science awarded in the US. After that she went off to Clarke College to found their computer science department. The language is pretty easy. I mean, like Pascal, it was made for teaching. It spread through universities like wildfire during the rise of minicomputers like the PDP from Digital Equipment and the Data General Nova that followed. This led to the first text-based games in BASIC, like Star Trek. And then came the Altair and one of the most pivotal moments in the history of computing: the porting of BASIC to the platform by Microsoft co-founders Bill Gates and Paul Allen. Tiny BASIC followed within a year, and suddenly everyone needed “a basic.” You had Commodore BASIC, BBC BASIC, BASIC for the Trash-80, the Apple II, Sinclair and more. Programmers from all over the country had learned BASIC in college on minicomputers, and when the PC revolution came, a huge part of that was the explosion of applications, most of which were written in… you got it, BASIC!

I typically think of the end of BASIC coming in 1991, when Microsoft bought Visual Basic off of Alan Cooper and object-oriented programming became the standard. But the things I could do with a simple IF THEN ELSE statement. Or a FOR TO statement, or a WHILE or REPEAT or DO loop. Absolute values, exponential functions, cosines, tangents, even super-simple random number generation. And input and output was just INPUT and PRINT, or LIST for source. Of course, that style of programming was always simpler and more approachable. So there, you now have Kemeny as a direct connection between Einstein and the modern era of computing. Two immigrants that helped change the world. One famous, the other with a slightly more nuanced but probably no less important impact in a lot of ways. Those early BASIC programs opened our eyes. Games, spreadsheets, word processors, accounting, human resources, databases. Kemeny would go on to chair the commission investigating Three Mile Island, a partial nuclear meltdown that was a turning point for nuclear power in the United States.
I wonder what Kemeny thought when he read the following on the Statue of Liberty: Give me your tired, your poor, Your huddled masses yearning to breathe free, The wretched refuse of your teeming shore. Perhaps, like many before and after, he thought that he would breathe free and with that breath, do something great, helping bring the world into the nuclear era and preparing thousands of programmers to write software that would change the world. When you wake up in the morning, you have crusty bits in your eyes and things seem blurry at first. You have to struggle just a bit to get out of bed and see the sunrise. BASIC got us to that point. And for that, we owe them our sincerest thanks. And thank you dear listeners, for your contributions to the world in whatever way they may be. You’re beautiful. And of course thank you for giving me some meaning on this planet by tuning in. We’re so lucky to have you, have a great day!


Java: The Programming Language, Not The Island

     9/25/2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at Java. Java is an Indonesian island with over 141 million people. Java man lived there 1.7 million years ago. Wait, wrong Java. The infiltration of coffee into the modern world can really trace its roots to ancient coffee forests on the Ethiopian plateau. Sufis in Yemen began importing coffee in the 1400s to make a beverage that would aid in concentration and act as a kind of spiritual intoxication. Um, still the wrong Java… Although caffeine certainly has a link somewhere, somehow.

The history of the Java programming language dates back to early 1991. It all started at Sun Microsystems with the Stealth Project. Patrick Naughton had considered going to NeXT due to limitations in C++ and the C APIs. But he stayed to join Stealth, a secret team of engineers led by a developer Sun picked up from Carnegie Mellon named James Gosling. Stealth was formed to explore new opportunities in the consumer electronics market. This came up when Gosling was writing a program to port software from a PERQ to a VAX and emulating hardware, as many, many, many programmers had done before him. I wonder if he realized, when he went on to build the first Java compiler and the original virtual machine, that he would go on to write a dozen books about Java and that it would consume most of his professional life. I wonder how much coffee he would have consumed if he had. They soon added Mike Sheridan to the team. The project was later known as the “Green” project and, with the advent of the web, somewhat pivoted into more of a web project. You see, Microsoft and the clones had some runaway success, but Apple and other vendors were a factor in the home market. And Sun saw going down market as the future of the company. They added a few more people and rented separate offices in Menlo Park. Lisa Friendly was the first employee in the Java Products Group. Gosling would be lead engineer. John Gage would direct the project. Jonni Kanerva would write the Java FAQ. The team started to build “C++ ++ --”: Sun co-founder Bill Joy wanted a language that combined the best parts of Mesa and C. In 1993, NCSA gave us Mosaic. That Andreessen guy was on the news saying the era of the desktop was over. These brilliant designers knew they needed an embedded application, one that could even be used in a web browser: an applet. The language was initially called “Oak,” but was later renamed “Java” in 1995, supposedly from a list of random words but really due to massive consumption of coffee imported from the island of Java. By the way, it only aids in concentration up to a point. Then you get jumpy. Like a Halfling. It took the Java team 18 months to develop the first working version. It is unknown how much java they drank in this time. Between the initial implementation of Oak in the fall of 1992 and the public announcement of Java in the spring of 1995, around 13 people ended up contributing to the design and evolution of the language. They were going to build a language that could sit on top of the operating systems on the market. This would allow them to be platform agnostic. In 1995, the team announced that the evolution of Mosaic, Netscape Navigator, would provide support for Java. Java gave us Write Once, Run Anywhere platform independence. You could run the code on a Mac, on Solaris, or on Windows.
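The Write Once, Run Anywhere idea is easier to see in code than to describe. Here is a minimal sketch, using a hypothetical class name (HelloPortable) that isn’t from the episode: javac compiles the source to architecture-neutral bytecode, and that same .class file then runs on any JVM, whatever the host operating system.

// Compile once:  javac HelloPortable.java   -> produces HelloPortable.class (bytecode, not machine code)
// Run anywhere:  java HelloPortable         -> any JVM on Solaris, Windows, macOS, or Linux executes
//                                              that same bytecode, interpreting it or JIT-compiling it.
public class HelloPortable {
    public static void main(String[] args) {
        // The host OS differs from machine to machine, but the bytecode we shipped never changes.
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}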
Java derives its syntax from C, and many of its object-oriented features were influenced by C++. Several of Java’s defining characteristics come from, or are responses to, its predecessors. Java was therefore meant to build on these and become a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic language. Before I forget: the “Mocha Java” blend pairs coffee from Yemen and Java to get a thick, syrupy, and highly caffeinated blend that is often found with a hint of cinnamon or clove. As with all other computer languages, innovation in Java’s design was driven by the need to solve a fundamental problem that the preceding languages could not solve.

To start, the creation of C is considered by many to have marked the beginning of the modern age of computer languages. It successfully synthesized the conflicting attributes that had so troubled earlier languages. The result was a powerful, efficient, structured language that was relatively easy to learn. It also included one other, nearly intangible aspect: it was a programmer’s language. Prior to the invention of C, computer languages were generally designed either as academic exercises or by bureaucratic committees. C was designed, implemented, and developed by real, working programmers, reflecting how they wanted to write code. Its features were honed, tested, thought about, and rethought by the people who actually used the language. C quickly attracted many followers who had a near-religious zeal for it, and it found wide and rapid acceptance in the programmer community. In short, C is a language designed by and for programmers, as is Java.

Throughout the history of programming, the increasing complexity of programs has driven the need for better ways to manage that complexity. C++ is a response to that need in C. To better understand why managing program complexity is fundamental to the creation of C++, consider that in the early days of programming, computer programming was done by manually toggling in the binary machine instructions at a front panel or by punching cards. As long as programs were just a few hundred instructions long, this worked. But as programs grew, assembly language was invented so that a programmer could deal with larger, increasingly complex programs by using symbolic representations of the machine instructions. As programs continued to grow, high-level languages were introduced that gave the programmer more tools with which to handle complexity. This gave birth to the first popular high-level programming language: FORTRAN. Though impressive, it had its shortcomings; it didn’t encourage clear and easy-to-understand programs. In the 1960s, structured programming was born. This is the method of programming championed by languages such as C. The use of structured languages enabled programmers to write, for the first time, moderately complex programs fairly easily. However, even with structured programming methods, once a project reaches a certain size, its complexity exceeds what a programmer can manage. Due to continued growth, projects were exceeding the limits of the structured approach. To overcome this problem, a new way to program had to be invented: object-oriented programming (OOP), a programming methodology that helps organize complex programs through the use of inheritance, encapsulation, and polymorphism.
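To make inheritance, encapsulation, and polymorphism a bit more concrete, here is a minimal Java sketch; the class names (Shape, Circle, Square, OopDemo) are purely illustrative and not from the episode.

// Encapsulation: Shape hides its state behind a private field and exposes behavior through methods.
abstract class Shape {
    private final String name;
    Shape(String name) { this.name = name; }
    String name() { return name; }
    abstract double area(); // Polymorphism: each subclass supplies its own area() implementation.
}

// Inheritance: Circle and Square reuse what Shape already provides.
class Circle extends Shape {
    private final double radius;
    Circle(double radius) { super("circle"); this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { super("square"); this.side = side; }
    @Override double area() { return side * side; }
}

public class OopDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(2.0), new Square(3.0) };
        for (Shape s : shapes) { // One loop, many behaviors: the JVM dispatches to the right area().
            System.out.printf("%s area = %.2f%n", s.name(), s.area());
        }
    }
}

The loop at the bottom never needs to know which concrete shape it is holding, which is exactly the kind of organization that helps keep a growing program manageable.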
In spite of the fact that C is one of the world’s great programming languages, there is still a limit to its ability to handle complexity. Once the size of a program exceeds a certain point, it becomes so complex that it is difficult to grasp as a totality. While the precise size at which this occurs differs, depending upon both the nature of the program and the programmer, there is always a threshold at which a program becomes unmanageable. C++ added features that enabled this threshold to be broken, allowing programmers to comprehend and manage larger programs.

The primary motivation for creating Java was the need for a platform-independent, architecture-neutral language that could be used to create software to be embedded in various consumer electronic devices, such as microwave ovens and remote controls. The developers sought a different approach to building the language, one that did not require compiling for each specific target CPU the way C and C++ did; a solution that was easier and more cost-efficient. But embedded systems took a backseat when the Web took shape at about the same time that Java was being designed. Java was suddenly propelled to the forefront of computer language design. This could be in the form of applets for the web or runtime-only packages known as Java Runtime Environments, or JREs. At the time, developers had fractured into three competing camps: Intel, Macintosh, and UNIX. Most software engineers stayed within their fortified boundary. But with the advent of the Internet and the Web, the old problem of software portability between platforms suddenly became important in ways it hadn’t been since the forming of the ARPANET. Even though many different platforms are attached to the Internet, users would like them all to be able to run the same program. What was once an irritating but low-priority problem had become a high-profile necessity. The team realized this pressing need and made the switch to refocus Java from embedded consumer electronics to Internet programming. So while the desire for an architecture-neutral programming language provided the initial spark, the Internet ultimately led to Java’s large-scale success.

If Java derives much of its character from C and C++, this is by intent. The original designers knew that using familiar syntax would make their new language appealing to legions of experienced C/C++ programmers. Java also shares some of the other attributes that helped make C and C++ successful. Java was designed, tested, and refined by real, working programmers. Not scientists. Java is a programmer’s language. Java is also cohesive and logically consistent. If you program well, your programs reflect it. If you program poorly, your programs reflect that, too. Put differently, Java is not a language with training wheels. It is a language for professional programmers.

Java 1 would be released in 1996 for Solaris, Windows, Mac, and Linux. It was released as the Java Development Kit, or JDK, and to this day we still refer to the version we’re using by its JDK number, as in JDK 11. Version 2, or 1.2, came in 1998, and with the rising popularity we got a few things the burgeoning community needed. These included event listeners, a Just In Time compiler, and changes to thread synchronization. 1.3, code-named Kestrel, came in 2000, bringing RMI compatibility with CORBA, synthetic proxy classes, the Java Platform Debugger Architecture, the Java Naming and Directory Interface in the core libraries, the HotSpot JVM, and Java Sound.
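One of those 1.3 features, synthetic proxy classes, is worth a quick illustration. The sketch below uses java.lang.reflect.Proxy, which did arrive in that era, though the names ProxyDemo and Greeter are mine and the lambda syntax is from much later Java. The call builds a class at runtime that implements an interface and routes every method call through a handler.

import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter {
        String greet(String whom);
    }

    public static void main(String[] args) {
        Greeter real = whom -> "Hello, " + whom; // the object we want to wrap

        // Proxy.newProxyInstance generates a synthetic class at runtime that implements Greeter
        // and forwards every call to the InvocationHandler lambda below.
        Greeter traced = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxyObject, method, methodArgs) -> {
                    System.out.println("calling " + method.getName());
                    return method.invoke(real, methodArgs);
                });

        System.out.println(traced.greet("Duke")); // prints "calling greet", then "Hello, Duke"
    }
}

This same mechanism is what many later Java frameworks lean on when they wrap your objects to add behavior like logging or transactions.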
Merlin, or 1.4, came in 2002, bringing the frustrating regular expressions, native XML processing, logging, non-blocking I/O, and SSL. Tiger, or 1.5, came in 2004. This was important. We could autobox, get compile-time type safety with generics, statically import the static members of a class, and use annotations for declarative programming, and runtime libraries were mapped into memory, a huge improvement to how JVMs work. Java 5 also gave us the version number change, so JDK 1.5 was officially recognized as Java 5. JDK 1.6, or Mustang, came in 2006. This was a big update, bringing monitoring and management tools, compiler access that gave us programmatic access to javac, and pluggable annotations that allowed us to analyze code semantically as a step before javac compiles the code. Web Start got a makeover and SE 6 unified plugins with Web Start. Enhanced XML services would be important (at least until the advent of JSON) and you could mix JavaScript up with Java. We also got JDBC 4, Character Large Objects, SwingWorker, JTable improvements, better SQL datatypes, native PKI, Kerberos, and LDAP support, and honestly the most important thing was that it was stable. Although I’ve never written code stable enough to encounter their stability issues… Not enough coffee, I suppose.

Sun purchased Oracle in 2009. Wait, no, that’s one of my Marvel What If comic book fantasies where the world was a better place. Oracle bought Sun in 2009. After ponying up $5.6 billion, Oracle had a lot of tech based on Sun products, and seeing Sun as an increasingly attractive acquisition target for other companies, Oracle couldn’t risk someone else swooping in and buying Sun. With all the turmoil created, it took 5 years during a pretty formative time on the web, but we finally got Dolphin, or 1.7, which came in 2011 and gave us compressed 64-bit pointers, strings in switch statements, binary integer literals and underscores in numeric literals, better graphics APIs, more cryptography algorithms, and a new I/O library that gave even better platform compatibility. Spider, or 1.8, came along in 2014. We got the ability to launch JavaFX application JARs, statically-linked JNI libraries, a new date and time API, annotations on Java types, unsigned integer arithmetic, and a JavaScript runtime that allowed us to embed JavaScript code in apps - whether this is a good idea or not is still TBD. Lambda expressions had been dropped from Java 7, so here we finally got them. And this kickstarted a pretty interesting time in the development of Java. We got 9 in 2017, 10 and 11 in 2018, and 12 and 13 in 2019, with 14 on the way in early 2020. Of these, only 8 and 11 are LTS, or commercial Long Term Support, releases, basically meaning we got the next major release after 8 in 2018, and according to my trend line we should expect the next LTS in 2021 or 2022. JDK 13, released in September 2019, gives us text blocks, switch expressions, improved memory management by returning unused heap memory to the OS, improved application class-data sharing, and a reimplementation of the legacy socket API. But it isn’t an LTS release. Today there are over 45 billion active Java Virtual Machines, and Java remains arguably the top language for microservice and CI/CD environments, and a number of other use cases. Other languages have come. Other languages have gone. Many are better in their own right. Some are not. Java is not perfect. It was meant to reduce complexity. But as languages evolve they become more complex.
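A couple of those release highlights are easier to show than to tell. Here is a small sketch, with a hypothetical class name (FeatureTour), pairing Java 5’s generics and autoboxing with a Java 8 lambda used against the Streams API, which also arrived in 8.

import java.util.Arrays;
import java.util.List;

public class FeatureTour {
    public static void main(String[] args) {
        // Java 5 (Tiger): generics give compile-time type safety, and autoboxing quietly
        // wraps these int literals into Integer objects for us.
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // Java 8: a lambda expression handed to the Streams API, in place of the
        // anonymous inner class we would have written in earlier releases.
        int sumOfEvens = numbers.stream()
                .filter(n -> n % 2 == 0)
                .mapToInt(Integer::intValue)
                .sum();

        System.out.println("Sum of even numbers: " + sumOfEvens); // prints 6
    }
}

As for the JDK 13 items above, text blocks and switch expressions shipped there as preview features, so trying them at the time meant compiling and running with the --enable-preview flag.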
A project with a million lines of code is monolithic and probably incorporates plugins or frameworks, like Spring Security as an example, that make the code even more complex. But Java is meant to reduce cyclomatic complexity, to allow for a language that is simple enough for a professional to pick up quickly and only be as complex as the quality of the code being compiled. I don’t personally love Java. I respect it. And I adore high-quality programmers and their code in any language. But I’ve had to redo so much work because other languages have come and gone over the years that if I were starting a new, big, monolithic web app today, I’d probably use Java every time. Which isn’t to say that Java isn’t useful in microservice architectures. Depending on what’s required from the contract testing on a service, I might use Java, Go, Node, Python, or even the formerly hipster Ruby. Although I don’t love drinking PBR… If I’m writing an Android app, I need to know Java. No matter what the lawyers say. If I’m planning an enterprise web app, Java needs to be in the conversation. But usually, I can do the work in a fraction of the time using something like Python. But most big companies speak Java. And for good reason. Because of the write once, run anywhere approach and the level of permissions a JRE needs, there have been security challenges with running Java on desktop computers. Apple deprecated Java on the Mac in 2010. Users could still install it themselves, but Java has since shifted largely to server-side applications and is the gold standard for those. I’m certainly not advocating going back to the 90s and running Java apps on our desktops any more. No matter what you think of Java, one thing you have to admit: the introduction of the language and its evolution have had a substantial impact on the IT industry, and it will continue to do so. A great takeaway here might be that there’s always a potential alternative that might be better suited for a given task. But when it comes to choosing a platform that will be there in a decade or three, getting support, and getting a team that can scale, sometimes you might end up using a solution that doesn’t immediately seem as well suited to a need. But it can get the job done. As it’s been doing since James Gosling and the rest of the team started the project back in the early 90s. So thank you listeners, for sticking with us through this episode of the History of Computing Podcast. We’re lucky to have you.


(OldComputerPods) ©Sean Haas, 2020