
Sep 5, 2019

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we’re going to look at the emergence of the cloud. As with everything evil, the origin of the cloud began with McCarthyism. From 1950 to 1954 Joe McCarthy waged a war against communism. Wait, wrong McCarthyism. Crap. After Joe McCarthy was condemned and run out of Washington, **John** McCarthy made the world a better place in 1955 with a somewhat communistic approach to computing. The 1950s were the peak of the military-industrial complex. The SAGE air defense system needed to process data coming in from radars and perform actions based on that data. This is when McCarthy stepped in. John, not Joe. He proposed things like allocating memory automatically between programs, quote “Programming techniques can be encouraged which make destruction of other programs unlikely,” and modifying FORTRAN to trap programs into specified areas of storage. While a person was loading cards or debugging code, the computer could be doing other things. To use his words: “The only way quick response can be provided at a bearable cost is by time-sharing. That is, the computer must attend to other customers while one customer is reacting to some output.” He posited that turnaround times could drop from three hours, or even a day and a half, down to seconds. Remember, back then these things were huge and expensive. So people worked shifts and ran them continuously.

McCarthy had been at MIT, and Professor Fernando Corbató there actually built a working time-sharing system, the Compatible Time-Sharing System (CTSS), between 1961 and 1963. At about the same time, Professor Jack Dennis from MIT started doing about the same thing with a PDP-1 from DEC - he’s probably one of the most influential people that many of the folks I talk to have never heard of. He called this APEX and hooked multiple terminals up to the TX-2. Remember John McCarthy? He and some students did the same thing in 1962 after he moved on to become a professor at Stanford. 1965 saw Alan Kotok sell a similar solution for the PDP-6, and then, as the 60s rolled on and people in the Bay Area got really creative and free-lovey, Corbató, Jack Dennis of MIT, a team from GE, and another from Bell Labs started to work on Multics, or Multiplexed Information and Computing Service for short, for the GE-645 mainframe. Bell Labs pulled out, and Multics was finished by MIT and GE, who then sold their computer business to Honeywell so they wouldn’t be out there competing with some of their customers. Honeywell sold Multics until 1985, and it included symmetric multiprocessing, paging, a supervisor program, command programs, and a lot of the things we now take for granted in the Linux, Unix, and macOS command lines.

But we’re not done with the 60s yet. ARPAnet gave us a standardized communications platform, and distributed computing - really a model where the components of a software system live on different networked computers - started in the 60s and became its own branch of computer science in the late 1970s. Oh, and Telnet came at the tail end of 1969 in RFC 15, allowing us to remotely connect to those teletypes. People wanted time-sharing systems, which led to Project Genie at Berkeley, TOPS-10 for the PDP-10, and IBM’s failed TSS/360 for the System/360. To close out the 60s, Ken Thompson, Dennis Ritchie, Doug McIlroy, Mike Lesk, Joe Ossanna, and of course Brian Kernighan at Bell Labs quietly started a project to throw out the fluff from Multics and build a simpler system. This became Unix. 
Unix was originally developed in assembly, but Ritchie would write C in 1972 and the team would eventually rewrite Unix in C. Pretty sure management wasn’t at all pissed when they found out. Pretty sure the name Uniplexed Information and Computing Service, or “eunuchs” for short, wasn’t punny enough for the Multics team to notice. BSD would come shortly thereafter. Over the coming years you could create multiple users and design permissions in a way that users couldn’t step on each other’s toes (or, more specifically, delete each other’s files). IBM did something interesting in 1972 as well: they gave us the virtual machine, which allowed them to run an operating system inside an operating system. At this point, time-sharing options were becoming commonplace on mainframes.

Enter Moore’s Law. Computers got cheaper and smaller. The Altair and hobbyists became a thing. Bill Joy assembled the first BSD release at Berkeley in the late 70s and would later bring it to Sun workstations. Computers kept getting smaller. CP/M showed up on early microcomputers around the same time and hung on until about 1983. Apple arrived on the scene. Microsoft DOS appeared in 1981. And in 1983, with all this software you had to pay for really starting to harsh his calm, Richard Stallman famously set out to make software free. Maybe this was in response to Gates’ 1976 Open Letter to Hobbyists asking PC hobbyists to actually pay for software. Maybe they forgot they wrote most of Microsoft BASIC on DARPA gear. Given that computers were so cheap for a bit, we forgot about multi-user operating systems for awhile. By 1991, Linus Torvalds, who also believed in free software - later also known as open source - developed a Unix-like operating system he called Linux.

Computers continued to get cheaper and smaller, and now you could have them on multiple desks in an office. Companies like Novell brought us utility computers we now refer to as servers: one computer to host all the files so users could edit them. CERN gave us the first web server in 1990. The University of Minnesota gave us Gopher in 1991. NTP version 3 came in 1992. The 90s also saw the rise of virtual private networks and client-server applications. You might load a Delphi-based app on every computer in your office and connect that fat client to a shared database on a server so that, for example, everyone could enter accounting information into the same system, or access customer information to do sales activities and report on them. Napster mainstreamed distributed file sharing, and the same techniques were being used in clusters of servers all controlled by a central IT administration team. Remember those virtual machines IBM gave us? You could now cluster and virtualize workloads and have applications served from a large number of distributed computing systems. But as workloads grew, the fault tolerance and performance necessary to support them became more and more expensive.

By the mid-2000s it was becoming more acceptable to move to a web-client architecture, which meant large companies no longer had to bundle up software and automate its delivery; they could instead use an intranet to direct users to a series of web pages that let them perform business tasks. Salesforce was started in 1999. They are the poster child for software as a service, and founder/CEO Marc Benioff coined the term platform as a service, allowing customers to build their own applications using the Salesforce development environment. 
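As an aside, here’s roughly what that 90s fat-client pattern looked like in practice: every desktop in the office runs the same client app and talks to one shared database on a server down the hall. This is only a minimal sketch in Python (the episode mentions Delphi, but any client language works), and it assumes a hypothetical PostgreSQL server; the host name, table, and credentials are made-up placeholders, not anything from the episode.

```python
# Minimal sketch of the "fat client + shared database" pattern described above.
# Assumes a hypothetical PostgreSQL server; host, database, and credentials are placeholders.
import psycopg2  # third-party PostgreSQL driver: pip install psycopg2-binary

conn = psycopg2.connect(
    host="db.office.internal",      # the one shared server every desktop points at
    dbname="accounting",
    user="clerk",
    password="not-a-real-password",
)

# Every desktop runs this same client code; the data lives in one place,
# so whatever one user enters is immediately visible to everyone else.
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO ledger (account, amount, memo) VALUES (%s, %s, %s)",
        ("4000-sales", 125.00, "Invoice 1047"),
    )

conn.close()
```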
But it wasn’t until we started breaking web applications up and developed methods to authenticate and authorize parts of them to one another, using technologies like SAML (introduced in 2002) and OAuth (2006), that we were able to move into a more micro-service-oriented paradigm for programming. Amazon and Google had been experiencing massive growth, and in 2006 Amazon created Amazon Web Services and offered virtual machines on demand to customers through a service called Elastic Compute Cloud. Google launched what would become G Suite in 2006, providing cloud-based mail, calendar, contacts, documents, and spreadsheets, and then offered developers a platform for their own apps in 2008 with Google App Engine. In both cases, the companies had invested heavily in infrastructure to support their own workloads, and renting some of it out to customers just… made sense. Microsoft, seeing the emergence of Google as not just a search engine but a formidable opponent on multiple fronts, joined the Infrastructure as a Service party in 2008, offering virtual machines for pennies per minute of compute time. Google, Microsoft, and Amazon still account for a large percentage of cloud services offered to software developers.

Over the past 10 years the technologies have evolved, mostly just by incrementing a number, like OAuth 2.0 or HTML 5. Web applications have been broken up into smaller and smaller parts - the mythical man-month means you want smaller teams, each owning a service, or micro-service, that performs specific tasks under a contract with the other teams that consume it. Amazon, Google, and Microsoft see these services and build more workload-specific offerings, like database as a service, putting a REST front-end on a database, or data lakes as a service. Standards like OAuth even allow vendors to provide identity as a service, linking up all the things. The cloud, as we’ve come to call hosting services, has been maturing for 55 years, from shared compute time on mainframes, to shared file storage space on a server, to very small shared services like payment processing using Stripe. Consumers love paying a small monthly fee for access to an online portal or app rather than having to deploy large amounts of capital to bring in an old-school JDS Uniphase style tool to automate tasks in a company. Software developers love importing an SDK or calling a service to get an entire capability for free, allowing them to go to market much faster and look like magicians in the process. And we don’t have teams at startups running around with fire extinguishers to keep gear humming along. This reduces the barrier to building new software and apps and democratizes software development. App stores and search engines then make it easier than ever to put those web apps and apps in front of people to make money.

In 1959, John McCarthy had said, “The cooperation of IBM is very important but it should be to their advantage to develop this new way of using a computer.” Like many new philosophies, it takes time to set in and evolve, and it takes a combination of advances to make something so truly disruptive possible. The time-sharing philosophy gave us Unix and Linux, which today are the operating systems running on a lot of these cloud servers. But we don’t know or care about that, because the web provides a layer on top of them that obfuscates the workload, much as the operating system obfuscated the work of the components of the system. 
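Before we zoom back out, here’s a rough idea of what “calling a service” looks like for a developer today: get an OAuth 2.0 bearer token from an identity provider, then hit a REST endpoint with it. This is only a sketch - the token URL, API URL, credentials, and response fields below are hypothetical placeholders, not any real vendor’s API.

```python
# Minimal sketch of calling a micro-service with an OAuth 2.0 bearer token.
# The token endpoint, API endpoint, and credentials are hypothetical placeholders.
import requests  # third-party HTTP client: pip install requests

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical identity-as-a-service endpoint
API_URL = "https://api.example.com/customers/42"     # hypothetical REST front-end on a database

# 1. Exchange client credentials for a bearer token (OAuth 2.0 client_credentials grant).
token_resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-app",
    "client_secret": "not-a-real-secret",
})
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call the service, proving who we are with the token.
api_resp = requests.get(API_URL, headers={"Authorization": f"Bearer {access_token}"})
api_resp.raise_for_status()
print(api_resp.json())  # e.g. the customer record this micro-service owns
```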
Today those clouds obfuscate various layers of the stack so you can enter at whatever layer you want, whether it’s a virtual computer, a service, or just a web app to consume. And this has led to an explosion of diverse and innovative ideas. Apple famously said “there’s an app for that,” but without the cloud there certainly wouldn’t be. And without you, my dear listeners, there wouldn’t be a podcast. So thank you so very much for tuning into another episode of the History of Computing Podcast. We’re lucky to have you. Have a great day!