
Dec 21, 2019

Whirlwind

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we’re going to look at a computer built at the tail end of World War II called Whirlwind. What makes Whirlwind so special? It took us out of the era of punch-card, batch-processed computing and into the era of interactive computing.

Sometimes the names we end up using for things evolve over time. Your memories are a bit different from computer memory. Computer memory is information that is ready to be processed. Long term memory, well, we typically refer to that as storage. That’s where you put your files. Classes you build in Swift are loaded into memory at runtime. But that memory is volatile, and we call it random-access memory now. This computer memory first evolved out of MIT with Whirlwind. And so they came up with what we now call magnetic-core memory in 1955.

Why did they need speeds faster than a vacuum tube? Well, it turns out vacuum tubes burn out a lot. And the flip-flop switching they do was cool for payroll. But not for tracking Intercontinental Ballistic Missiles in real time and reacting to weather patterns so you can make sure to nuke the right target. Or intercept one that’s trying to nuke you! And in the middle of the Cold War, that was a real problem.

Whirlwind didn’t start off with that mission. When MIT kicked things off, computers mostly used vacuum tubes. But they needed something… faster. Perry Crawford had seen the ENIAC in 1945 and recommended a digital computer to run simulations. They were originally going to train pilots in flight simulation, and they had Jay Forrester start working on it in 1947 ‘cause they needed to train more pilots faster. But as with many a true innovation in computing, this one was funded by the military, and it saw Forrester team up with Robert Everett to look for a way to run programs fast. That meant programs needed to be stored on the device rather than run in batch mode off punch cards loaded into the system. They wanted something really wild at the time. They wanted to see things happening on screens.

It started with flight simulation, which would later become a popular computer game. But as the Cold War set in, the Navy didn’t need to train pilots quite as fast. Instead, they wanted to watch missiles traveling over the ocean, and they wanted computers that could be programmed to warn that missiles were in the air and potentially even intercept them. This required processing at speeds unheard of at the time. So they got a military grant for a million bucks a year, brought in 175 people, and built a 10 ton computer. And they planned to build 2K of random-access memory. To put things in context, the computer we’re recording on today has 16 gigs of memory, roughly 8,000,000 times more. And almost immeasurably faster. Also, cheaper. The Williams tubes they used at first would cost them $1 per bit per month. None of the usual ways people got memory were working. Flip-flopping circuits took too long, and other forms of memory at the time were unreliable. And you know what they say about necessity being the mother of invention. By the end of 1949 the computer could solve an equation and output to an oscilloscope, which were used as monitors before we had… um… monitors.
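If you want to picture how core memory holds bits, here’s a quick sketch in Swift, since Swift came up earlier. This is just my own toy model, not anything the Whirlwind team wrote: it assumes the 32 x 32 plane of cores we’ll get to in a minute, where each bit sits at the crossing of an X line and a Y line, and where reading a bit is destructive, so the bit has to be written back.

```swift
// Toy model of a 32 x 32 magnetic-core plane: 1,024 cores, one bit each.
// Real core memory selected a core by driving half-select current down one
// X wire and one Y wire; only the core at the crossing saw enough current
// to flip. Reads were destructive, so the hardware rewrote the bit after
// sensing it.
struct CorePlane {
    private var cores = [[Bool]](repeating: [Bool](repeating: false, count: 32),
                                 count: 32)

    // Write a bit by "driving" the selected X and Y lines.
    mutating func write(x: Int, y: Int, bit: Bool) {
        cores[y][x] = bit
    }

    // Destructive read: sensing the flip clears the core, then the cycle
    // restores it, like the real rewrite step.
    mutating func read(x: Int, y: Int) -> Bool {
        let bit = cores[y][x]
        cores[y][x] = false   // the read wipes the core...
        cores[y][x] = bit     // ...so we write it back
        return bit
    }
}

var plane = CorePlane()
plane.write(x: 5, y: 12, bit: true)
print(plane.read(x: 5, y: 12))   // true
```

The real thing did all of this in ferrite and wire rather than an array of Booleans, of course, and the rewrite happened in hardware as part of the memory cycle. But the row-and-column addressing is the idea that made a thousand bits per plane practical.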
An Wang had researched using magnetic fields to switch currents, and Forrester ended up trying to do the same thing. But Forrester had to manage the project, so he brought in William Papian and Dudley Buck to test various elements until they could find something that would work as memory. After a couple of years they figured it out, and built a memory plane of 1024 cores, or 32 x 32. They filed for a patent for it in 1951. Wang also got a patent, as did Jan Rajchman from RCA, although MIT would later dispute that Buck had leaked information to Rajchman. Either way, they had the first real memory, which would be used for decades to come!

The tubes used for processing in the Whirlwind would end up leading Ken Olsen to transistors, which led to the transistorized TX-0 (the love of many a Tech Model Railroad Club member) and later to Olsen founding DEC. Suddenly, the Whirlwind was the fastest computer of the day.

They also worked on the first pointing devices used in computing. Light-sensing vacuum tubes had been introduced in the 1930s, so they introduced a pen that could interact with the tubes in the oscilloscopes people used to watch objects moving on the screen. There was an optical sensor in the gun that took input from the light coming off the screen. They used light pens to select an object. Today we use fingers. Those would evolve into the Zapper so we could play Duck Hunt by the 80s, but they began life in missile defense.

Whirlwind would evolve into Whirlwind II, and Forrester would end up fathering the SAGE missile defense system on the technology. SAGE, or Semi-Automatic Ground Environment, would weigh 250 tons and be the centerpiece of NORAD, or North American Aerospace Defense Command. Remember the movie War Games? That.

Dudley Buck would end up giving us content-addressable memory and helium-cooled processors, and he almost ended up inventing the microprocessor along the way. Many of the things he theorized and built on the way to a functional “cryotron,” as he called his superconducting switch, would be used in the later production of chips.

IBM wanted in on these faster computers. So they paid $500,000 to Wang, who would use that money to found Wang Laboratories, which by the 80s would build word processors and microcomputers. Wang would also build a tablet with email, a phone handset, and a word processing tool called Wang Office. That was the 1990 version of an iPad!

After SAGE, Forrester would go on to teach at the Sloan School of Management and come up with system dynamics, the ultimate “what if” system. Basically, after he pushed the boundaries of what computers could do, helping us to maybe not end up in a nuclear war, he would push the boundaries of social systems.

Whirlwind gave us memory, and tons of techniques to study, produce, and test transistorized computers. Without it, no SAGE, and none of the innovations that exploded out of that program. And probably no TX-0, and therefore no PDP-1, and none of the innovations that came out of the minicomputer era. It is a recognizable domino on the way from punch cards to interactive computers. So we owe a special thanks to Forrester, Buck, Olsen, Papian, and everyone else who had a hand in it.

And I owe a special thanks to you, for tuning in to this episode of the History of Computing Podcast. We’re so lucky to have you. Have a great day!