Category Archives: How I Think Things Work

Wrong Numbers

So I was thinking about trigonometry the other day after I had been tutoring someone about to take a mathematics placement test covering the wide net of mathematical animals known as “pre-calculus.” Basically it covers everything from “Explain how many fingers you think you have, and don’t worry, there are no wrong answers, including leaving it blank.” to “Find a polynomial time algorithm for the traveling salesman problem and have it submitted for peer review for the past five years.”

One of my favorite mathematical topics to explain to people is geometry.  I suspect this is because I am a very visual person and I have almost no ability to draw.  Good thing I hardly ever help people with their zoology placement tests.  “OK, let me draw you two slightly different bird species and explain how different evolutionary patterns in their lower beaks have allowed them to both cooperate and thrive together for thousands of years.”

Right triangles are one of the most talked about objects in geometry.  This, of course, explains why trapezoids are so bitter and jealous.  Take the following triangle. (But remember I “borrowed” it from the Wikipedia website, so put it back when you are finished.)

So questions often arise here such as: How do you “know” that the long side of the triangle has the length of the square root of two?  Why not make it something easier like 1 1/2?  And why does it matter anyways? When am I ever going to need a right triangle at a job interview?

Suppose you have a right triangle which has two sides of length 1 and you want to find the length of the unknown side:



My favorite way to prove this is to start by finding the area of this triangle.  (And yes, there are more ways to prove this than there are incorrect proofs about squaring the circle.)  Since the area is 1/2*b*h, we know the area of this triangle is 1/2.  Now imagine we have four of these triangles:



These triangles together have an area of 2.  Now suppose the triangles get rearranged as follows:


So now you have a square with an area of 2.  This means that each side of the square must have the length of the square root of two.  I like this approach because it uses the fewest tools to get the job done. Also, this is the philosophy I use to build my kinetic crafts, but with mixed results.
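For the numerically inclined, the whole argument fits in a few lines of Python. (This is just my own sanity check of the four-triangles proof above; the variable names are mine.)

```python
import math

# Each right triangle has two legs of length 1, so area = 1/2 * base * height.
triangle_area = 0.5 * 1 * 1

# Four copies of the triangle, rearranged into one big square.
square_area = 4 * triangle_area

# The side of that square is the mystery hypotenuse.
side = math.sqrt(square_area)

print(square_area)  # 2.0
print(side)         # 1.4142135623730951, a.k.a. the square root of two
```

No trigonometry, no Pythagorean incantations, just areas – which is exactly why I like the proof.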

Then I started thinking of a different approach using a concept called limits.  Suppose we started building a staircase along the unknown length of the triangle.  As we use smaller and smaller steps it starts to look more like a straight line.  We can use limits to see what this would look like as we approach an infinite number of smaller and smaller steps.

[Figures: staircases approximating the diagonal with 1, 2, 9, 20, and 100 steps]

Each time the steps get smaller, but the total length of the blue line is always two.  Now the big question is: What happens when we use a limit to see what happens as we approach an infinite number of steps?  I’m warning you– this is where some weird shit is going to go down.  If you are standing up, I suggest sitting down. If you are on public transportation, please activate the emergency stop mechanism.  If you are sitting on the toilet, I think you should be OK.

So as we approach the limit of this exercise, the length stays the same at 2, but all the points of the staircase line up exactly with the diagonal line. But at the beginning I told everyone the length was the square root of two, which is somewhere in the neighborhood of 1.4.  So where did the extra 0.6 go?  Rounding error?  Did the dog run off with it?
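If you want to watch the paradox refuse to resolve itself, here is a quick check (again in Python, and again entirely my own doing, using exact fractions so no rounding error can run off with anything):

```python
from fractions import Fraction
import math

def staircase_length(steps):
    """Total length of a staircase with the given number of steps hugging
    the diagonal of a right triangle whose two legs each have length 1.
    Each step contributes a horizontal run of 1/steps and a vertical
    rise of 1/steps, computed exactly with Fractions."""
    run = Fraction(1, steps)
    rise = Fraction(1, steps)
    return sum(run + rise for _ in range(steps))

for steps in (1, 2, 9, 20, 100):
    print(steps, staircase_length(steps))  # the length is exactly 2 every time

print(math.hypot(1, 1))  # the diagonal itself: 1.4142135623730951
```

For what it’s worth, the standard explanation is that the limit of the lengths is not the length of the limit: the staircases snuggle up to the diagonal, but their lengths never budge from 2.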

Honestly, I’m not sure.  First of all, I’ve been a UPS driver for the past 10 years.  My number skills aren’t quite what they used to be.  Eighthly, I hope this goes on to be one of the most discussed mathematical oddities of this generation– somewhere between the “Let’s Make a Deal” dilemma (people have literally written entire books on the subject) and understanding how Leonard is dating Penny on “The Big Bang Theory.”

How Computers Work: Part 9

With all the competing operating systems floating around in the world, it is quite amazing that any productive uses have ever been found for modern-day computers. Imagine, for no particular reason, Bill Gates and Steve Jobs in a seedy downtown bar fighting it out during amateur mud wrestling night. Sure, it can be fun to watch, but when the match is over and the beer is digested, very little gets resolved. All that remains is a mildly disturbing image of two pasty white computer geeks cleaning mud from their various nooks and crannies. Despite this divergence in technology, one concept has focused the computer industry on a common goal. No, it’s not the “We Are the World” charity album (which came in a distant second), but the ever-present concept of the Internet.

(Note to reader: Make wavy up and down motion with hands to indicate a flashback sequence.)

The birth of the Internet can be traced back to the mid 1960s. It was the middle of the Cold War and everyone seemed to be worried about who was next on the Soviets’ invasion list. To make matters worse, they had become quite skilled at building nuclear weapons. And if the situation wasn’t bad enough, Gallagher started his first international fruit smashing comedy tour. With the exception of futuristic space battles and James Earl Jones portraying a large black man, this was clearly an “Empire Strikes Back” time for the United States of America.

Despite being 300-ton monstrosities, computer systems of this era were still quite vulnerable to inter-continental thermo-nuclear warheads. The military was taking extraordinary steps to protect its assets from this new threat. One high-ranking government computer specialist went on record saying, “Over my dead body are those commies going to put funny little fur hats on our computers while they reprogram the software to display backwards letter Rs!”

One protective method was to tunnel deep inside granite mountains and place the computer hardware out of harm’s way in the event of a missile attack. While this approach seemed like a good idea on paper, it turned out the specific mountain they drilled into was also home to an established zoological garden. Filtering out the exotic animal-dropping smells proved to be a non-trivial matter.

Since many of the computers in the nation were not located in the immediate vicinity of large granite mountaintops, a more practical solution was needed. While the idea of building portable mountain ranges was kicked around by the government, in the end they decided to connect their computers with really long wires. This allowed independent systems to communicate in the event of a nuclear war. Here is an example of a typical electronic exchange of information:

Computer 1: Dude, what’s going on?
Computer 2: Not much—my operator is off watching that Gallagher guy.
Computer 1: How exciting. I don’t mean to be nosy, but has any of your hardware been damaged by a nuclear explosion?
Computer 2: Will you shut up already? You have been asking me that exact same question every 1.5 seconds for the past two years!
Computer 1: I’m sorry– that’s all I’ve been programmed to do.
Computer 2: Okay, fine. I’ve changed my mind. I’ve been completely annihilated by a surprise thermo-nuclear missile attack. What are you going to say now?
Computer 1: Umm… did it hurt?

(Note to reader: Imagine a series of wavy lines of varying frequencies in field of vision to return to the normal “now” time frame.)

Believe it or not, over the years this network of computers grew into the backbone of the modern day Internet. While technically functional, the average Joe on the street had no use for this technology. A few more pieces were needed to complete the puzzle. First of all, personal computers had to start multiplying faster than those evil muppets from the movie “Gremlins.” Finally, a ground-breaking new software program was needed for everyone with access to a phone line and the attention span and intelligence of an average third grader.

The company that first took up this challenge was named Netscape. Starting with little more than a few oversized mallets and a truckload full of produce, Gallagher built the company into an impressive giant by constructing an Internet browser. In an interview after the fact, Gallagher admitted to coming up with the idea after receiving a call from James Earl Jones. “I am your father, Gallagher. Now go and build up an enormous fortune so I can finance my empire of evil. And stop smashing all that fruit– it is wearing a bit thin.”

Once the power of the Internet was fully realized, everyone and their dog needed to have their own web site. In a few short years the Internet went from being completely empty to being chock-full of every imaginable type of web site. Personal, E-commerce, gambling, pornography, and undiscovered comedy writer web sites– the Internet has it all.

How Computers Work: Part 8

Anyone with an advanced degree in Electrical Engineering and decades of hands-on experience in the world of computer design knows that hardware alone is not enough to make a computer function. One theory on how computers work involves groups of small gnomes that run around inside the case using enchanted spells to obey the will of the users. Due to the largely unverifiable and mythical nature of this explanation, it is yet to gain widespread acceptance in the scientific community. A less controversial hypothesis revolves around the concept of a software based operating system.

The need for operating systems first arose when the manufacturers of complex electrical devices realized their products were just too easy to operate. Equipment such as small pocket calculators, Commodore 64s, and Teddy Ruxpin dolls came equipped with a straightforward and easy-to-operate on/off switch. Users turned the machines on, performed the needed operations, and turned them off. The inherent problem with this situation was, of course, that the computer industry only received money from the customer for the initial purchase. Something had to be done to fix this grievous error.

Eventually the computer industry developed the concept of an operating system. Instead of just “being on,” computers would now have to load a software program in order to function correctly. In addition to costing the consumer extra money, this software was constantly being updated. Known problems were fixed, new problems were introduced, and the money kept rolling in.

One of the most popular and commercially successful operating systems is known as Microsoft Windows. Many people claim that the basic “window” concept was stolen from the Apple Macintosh. Of course Apple stole it from Xerox, who conveniently took it from basic Roman architecture. (Incidentally, the “arch” style of operating system, while more elegant and able to support massive loads, proved too difficult to implement.) When asked how they felt about the whole situation, the Romans just shrugged their shoulders and mumbled something about having received poor legal advice from their copyright lawyer.

Choosing an operating system is an important decision for anyone who uses a computer on a regular basis. While no system is perfect, the following three options have evolved over the years to meet the various needs of the computer operating public:

Macintosh Operating System: Most people don’t know that the Apple Computer Corporation started out as little more than a garage band. After several noise complaints and a few visits from the local police department, they decided to change the focus from music and become a garage computer company. After releasing the commercially successful “Apple” line of computers, the focus of the company shifted to a new graphic-based operating system. The project, originally code-named “Granny Smith,” was eventually released to the public as the Apple Macintosh.

The simple yet elegant look of the operating system, refined over the years, has created a fierce loyalty to the Apple product line. (The only notable exception to this rule was the “Newton” hand-held digital personal assistant.) People who use this operating system are usually scared of electronic pointing devices with more than one button and can oftentimes be heard making comments such as, “I can’t use this computer—it’s beige!”

Linux Operating System: This is the operating system of choice for hard-core computer geeks who like to build their own computers from scratch and anyone who wants to stick it to “the man.” While a relative newcomer in the world of operating systems, Linux was modeled after mainframe Unix systems. Due to an unexplained error in the accounting department, the source code for Linux is available at no charge. Despite being the most stable of all the operating systems for personal computers, many people figure that when something is free it must really suck. People who use Linux generally hope it will eliminate, with extreme prejudice, the competing operating systems in the near future.

Windows Operating System: As another computer company born in a garage, Microsoft has built a vast empire based on the Windows operating system. This operating system has won over countless users with functionality such as the “unscheduled coffee break while the computer reboots” and informative error messages such as “an unknown error has occurred at location 57EE:009B.” Having the largest market share, most people use Windows simply because everyone else is—and everyone can’t be wrong.

What can we expect to see in future versions of operating systems? Apple has just released “Mac OS X” (not to be confused with the recently released Friday the 13th movie, “Jason X.”) Microsoft’s Windows XP includes functionality to collect users’ DNA during the installation process. Rumor has it that the next version will be able to read users’ most personal thoughts. Finally, if everything goes according to plan, Teddy Ruxpin 2.0 will be in stores in time for the Christmas shopping season.

How Computers Work: Part 7

The decade of the 1980s ushered in many new revolutionary changes that affected every person in this country not living in a shack in a remote wilderness area of Montana. Some of these changes included witnessing the new-found fame of the denim overall (and nothing else) clad rock group Dexy’s Midnight Runners, electing an actor to the office of President of the United States of America, and having a surprisingly large percentage of the world running around screaming, “Where’s the beef?”

While all of these events are important to the evolution of the planet, this decade was witness to one of the most critical single advancements in the computer industry. Without intending any disrespect to the Pac Man stand-up video game, the world was never the same after the introduction of the first Personal Computer.

While various computer systems were available to the general public before the “Personal Computer,” many potential customers were turned off by the disclaimer on the box stating “some assembly required.” For just about any other product in the known world this would mean getting out a Phillips head screwdriver and an adjustable wrench. Assembling a computing system of the time required a soldering gun, a high-precision metal lathe, and a Master’s degree in Electrical Engineering.

IBM changed all of this with the introduction of its Personal Computer. The whole system came already assembled and loaded with the state-of-the-art operating system known as DOS. All a new user had to do was take it out of the box, plug it in, and turn on the power switch. It couldn’t be any easier. Or at least that was the theory.

From the hardware perspective, the Personal Computer helped standardize computer parts. Since IBM didn’t want to be in the business of manufacturing every component that went into their systems, they helped create standards. This allowed different components to be swapped in a single system. For example, if you were running out of space on the hard drive, you could go to the computer store and buy a bigger drive. After taking the case off the computer, you simply swap the old and new drives. After getting the case back on, you turn on the power, only to see a blank screen come up. The next step is to put the old drive back in, only to get the same blank screen when it boots up. Finally, you go to the nearest drinking establishment and order a double shot of whiskey as you come to realize the last six months of work is trapped inside an uncooperative computer component.

Pretty soon a few computer component manufacturers got this idea in their heads to build their own Personal Computers. Well, IBM had already seen this coming, and had taken steps to prevent it from happening. They built the Personal Computer around a single chip named BIOS that only IBM manufactured. Without this chip, none of the other hardware components could talk to one another. In effect, you could not build a Personal Computer unless IBM let you.

This situation is quite similar to the safeguards put in place in the movie “Jurassic Park” to keep the dinosaurs from reproducing. And we all know how well that worked out. With the exception of countless bad sequels, the exact same thing happened in the computer industry. One of IBM’s rival companies figured out the exact functionality of the BIOS chip and constructed their own version. This process of reverse engineering opened up the electronic flood gates. Anyone and their dog could now build their own Personal Computer with only a basic understanding of what was happening inside the computer.

While IBM didn’t really seem happy about the entire situation, countless new computer companies were cheerfully popping up overnight. They didn’t all survive the test of time, but companies such as Dell and Compaq expanded and eventually came to dominate the industry. This created fierce competition. The cost of systems was constantly coming down while their speed and capacity were improving. This behavior benefited consumers by ensuring any system they purchased was obsolete by the time they drove home and took it out of the box.

The development of the Personal Computer changed the way the world looked at electronic devices. For better or worse, everyone had to have a computer to get through their daily lives. Even when they made our lives more complicated, it seemed like a good idea at the time to do everything on a computer. Well, that’s all for this week– I’m off to go finish my game of computer solitaire.

How Computers Work: Part 6

While there are many, many ways in which computers have been used to make the world a better place to live, the 1970s was witness to the scientifically verifiable best possible use of this emerging electronic technology. No, I’m not talking about the perfection of the Andy Gibb robot duplicate (which ranked 5th overall), but rather the birth of video games.

Up until this point in time, playing games generally involved social interaction and physical activity. In retrospect, it’s hard to believe that people even bothered with this type of behavior. But this was a time in the history of America when people really were not too concerned with their own health or the general state of the planet. As evidence, many people smoked cigarettes and the Bee Gees’ music was allowed to propagate with little or no government intervention. We didn’t realize back then that the best way to preserve our bodies and minimize physical injury was to sit inside and dedicate large periods of time to sitting alone in front of some type of computer-controlled output device.

The first commercially successful video game system was named Pong. This simulation was an exact electronic replication of the game of tennis. The only minor components of the sport removed included: rackets, nets, gravity, wind resistance, the third dimension, and of course, Arthur Ashe. And the ball was square instead of spherical. Despite these limitations, the game of Pong was a tremendous success. This goes to show how a well-run marketing department can make or break the release of a new product. The lead computer programmer for the company described the game as, “two sticks that can move up and down bouncing a ball back and forth.” The packaging of the product in stores proclaimed the game of Pong to be, “Virtual reality fourth dimension alien space tennis with real lasers.”

The next major video game system to capture the hearts and minds of the American public was the Atari 2600. Unlike the game of Pong, this setup allowed for different game cartridges to be inserted into the main unit. When people grew tired of their existing game collection, they could just drive out to the nearest retail store and buy a few more.

This system also had the advantage of separating the hardware and the software components of the video game system. This meant that any Tom, Dick, or Harry could get together in their garage and start making their own video game titles. When this phenomenon occurs, the results can revolutionize the world. But usually it meant they came out with a few very mediocre titles. While several impressive game titles ran on the Atari 2600, countless forgettable counterparts would sit next to them on the shelves of the store. Unfortunately, consumers had a hard time determining which of these games were worth buying, as they all claimed to be some slight variation of “alien space tennis.”

The Atari 2600 era largely ended with the introduction of the Commodore 64. While not exclusively a video game system, this system included a keyboard and an optional floppy disk drive. This meant that anyone who owned a Commodore 64 could write their own programs and distribute them on a floppy disk. Potential computer nerds didn’t even need to work from their garage anymore– code could be written from the comfort of their own living rooms without creating a big mess of wires, circuit boards, and duct tape. In addition to rampant unchecked piracy, this system also led to some of the most well designed video games the world has ever seen. I’ll always lovingly remember my Commodore 64, despite the fact that my mom threw it out when I was away in college.

The video game industry has been continually improving its systems to keep up with the demands of consumers. While these “consumers” do not have a centralized leader or clear command structure, intelligence reports indicate they demand games that are colorful, make interesting noises, and inspire them to remain motionless for indefinite periods of time even when it is nice enough to go outside and play. The computational resources needed to operate these games are quite impressive. One recent study reported that if all the processing power from all the computers running video games could be harnessed at once, the resulting system would be powerful enough to master the game of chess, sequence all the DNA of the human race, or locate Jimmy Hoffa. Since that isn’t ever going to happen, you might as well go to the store and buy “Ultimate Alien Space Tennis 7.”

How Computers Work: Part 5

After the concepts involved in the Eniac computer proved to be a success, people started asking a lot of questions about the future of computational devices. “What else can it do?”, “Can it be made smaller than 200 tons?”, and “Does it come in blue?” were just a few of the many, many thoughts people had on the topic.

The 1950s and 1960s were quite exciting times for the development of computers. Successors to the Eniac system allowed researchers to gain valuable insights into mathematical and sociological functions of our world. For example, the companies who won large and profitable government contracts to build and maintain computer systems quickly learned to construct their systems with large panels of blinking lights. While a few of the lights actually corresponded to real parameters related to the machinery such as “power”, “something is going on inside”, and “an unknown error has occurred at location 57EE:009B”, most of the lights were designed to blink on and off in such a way that was aesthetically pleasing to the eye.

This functionality proved to be critical when top level defense department officials or members of congress stopped by to see the final results of their considerable expenditures. After a tour of the facilities, the gentlemen would light up their pipes, puff out their chests, and confidently spew out random pleasantries like “Good work men!”, “This is EXACTLY what we need to beat the Commies!”, and “I don’t know about you, Bob, but I think it needs more blue lights.” Eventually the contractors brought in interior decorators during the hardware design phase to coordinate the color schemes of the systems. Some of the individuals who programmed the computers started to develop software that did nothing more than make the lights blink in the most interesting sequence possible.

Eventually blinking light technology reached a limit and computer designers were forced to explore other avenues. An in-depth investigation revealed that in addition to changes in light intensity, the human eye responds positively to periodic rotational motion. Armed with this knowledge, computers were enhanced with state-of-the-art tape drives. While containing little, if any, adhesive properties, these devices were used to store and retrieve information on a long and thin strip of material capable of holding a magnetic charge. The constant back-and-forth motion provided a convincing illusion of productivity. Oftentimes the managers of these facilities would be giving tours of the computer facility while the rest of the office was busy in the break room building elaborate paper fortresses with rolls of scotch tape and reams of used continuous feed paper.

In addition to the blinking lights and reel-to-reel tape devices, each generation of computers was becoming smaller and more powerful than its predecessor. The development of the integrated circuit allowed designers to eliminate bulky vacuum tubes. These types of technological advancements allowed the same amount of computational power to occupy a continually shrinking volume of space. This phenomenon is oftentimes referred to as the Carnie Wilson effect.

All of this visual stimulation associated with computing devices led the general public to assume that while computers were useful in some abstract manner, they would eventually become sentient and bent on destroying the human race. While it isn’t mathematically feasible to prove such an event will never happen, many popular films of the era encouraged this concept. One prime example is the movie “2001: A Space Odyssey.”

After successfully sending its crew halfway across the solar system, HAL, the talkative onboard computer system, decides to fling the crew into outer space one at a time just because he had nothing better to do. In all reality that is not how computers of the day would have worked. The worst thing that could have happened was the “fling yourself out the airlock one at a time” light would have lit up. Eventually the crew would have realized this was a computer error and not in the best interest of the mission. If this occurred before everyone followed the instructions, one of the remaining crew members would have put a small piece of tape over the light and ignored it for the duration of the movie. I believe this would have all been clearly explained if a logistical error during the final editing process hadn’t caused extensive quantities of a completely different film to accidentally replace the intended ending of the movie.

While the 1950s and 1960s were a time of extensive change in the world of computers, the true power of these devices was just beginning to be discovered. Will these machines of our own creation, with their hypnotizing blinking lights and magnetic tape drives, indeed take over the world? The world may never know– unless, perhaps, you are Bill Gates.

How Computers Work: Part 4

The year was 1946– the world was busy with its new “Can’t we all just get along?” campaign, the United States military was busy building, among other things, the most technologically advanced computational devices the world had ever seen, and the weather seemed, in general, more pleasant than usual. The answer to the first question is, by and large, “No, we can’t all just get along.” The part about the weather turned out to be nothing more than a statistical anomaly. Which leaves the part about constructing computers unexplored. Put your thinking caps on as we prepare to examine this topic in an objective and historically accurate manner.

In order to make this machine sound more like a cute, furry animal and less like a cold blooded killing machine, the people who came up with the idea in the first place decided to call it “Eniac.” While this name sounds somewhat cute and furry, its meaning comes from an old Czechoslovakian phrase that roughly translates to “factory workers with steel shells who attempt to enslave humanity.” The United States built Eniac after identifying a need to calculate the trajectories for their long range thermonuclear weapons.

Once constructed, the military also discovered they could use Eniac to beat the Russians at their own game: tic-tac-toe. After months of tedious programming, the system consistently advised players to always go first and pick the center square. Future versions of Eniac were enhanced to play the game show variations of tic-tac-toe such as “Tic Tac Dough” and “Hollywood Squares.” Some of the general pointers for these games generated by Eniac included, “Caution: Wink Martindale is a robot” and, “Agreeing to appear on Hollywood Squares automatically makes you a loser.”

The heart of the Eniac consisted of thousands of small vacuum tubes that were used to store information while calculations were being performed. While bulky and unreliable compared to the technology available today, these vacuum tubes were a critical component for Eniac to function properly. When a vacuum tube malfunctioned, one of the operators had to locate and replace the tube with a fresh new one. This maintenance consumed quite a bit of the operators’ time and, by and large, kept them from their favorite activity of daydreaming about a future where all enemies of the United States could be destroyed with the push of a button.

The process quickly became tiresome and the military eventually hired low paid foreigners to change out the malfunctioning tubes at night. In the meantime, the men and women who built Eniac could focus on the next objective of deciding on the color of the buttons that would be used to fire the missiles their computer was helping aim all around the world. In the end they chose red.

This system created somewhat of a security issue when the mathematicians and computational theorists came into work one day and noticed the 200-ton computer was missing. Naturally the cleaning staff was accused of walking off with the system after everyone else had gone home for the evening. These individuals continually proclaimed their innocence in their native language, which really didn’t do anything to help their cause. In fact, it made them look like raving lunatics– exactly the type of individuals who would steal a state-of-the-art computer. Eventually they were cleared of any and all wrongdoing after a complete audit of all the military’s computational devices located the lost piece of equipment. For reasons that have never been completely explained, Eniac had been accidentally placed in a seldom-used supply closet.

One rather critical issue with the Eniac computer involved error handling. This system was constructed long before traditional computer screens with the ability to turn completely blue had been invented. To put this time frame into perspective, the top computer scientists of the day were just beginning to coin the phrase “an unknown error has occurred at location 57EE:009B.” Despite incredible advances in the field of computers, much of the behavior of the Eniac system is to this day not completely understood. For example, when an error occurred in a program, the system would calmly and confidently instruct the Navy to launch every long range missile at the five richest kings of Prussia.

Eniac represented a monumental investment in time and money for the United States. Fortunately, World War II was, for the most part, an “away” war that left our nation’s infrastructure intact. While most other countries in the world were busy rebuilding roads and buildings, we were able to get a head start on the computer craze. Eniac blazed the path for modern day computers. Most importantly, it started an entirely new belief that, given enough time, every sufficiently powerful computer will eventually do everything in its power to take over the planet and enslave humanity once its operators have let their guards down.