As I’m sure most people are aware, computers talk in binary. That is, they only really use 1s and 0s to communicate, calculate and let you read the newspaper. Binary is a very primitive and silly way to count, but computers are rather stupid, and only really understand two words: **yes** and **no**, which makes binary a good way to communicate with them. Now you might think I’m over-simplifying this just to make a point, but it really is this simple. Computers only communicate with **YES** and **NO**. On and off. 1 and 0.

*That’s all fine and dandy*, you say, *but it really doesn’t explain how I am able to play absurdly good-looking games over the Internet!* No, it doesn’t. But it’s a good start. So let’s not move on too fast. Like I just mentioned, 1 and 0 can be seen as yes and no. But building big numbers, images and so on out of just yes and no doesn’t really seem plausible. Which it isn’t. But it also is.

This is an attempt to explain binary in a relatively simple and relaxed way. I know some of you will cringe at the simplifications, but please leave your anal tendencies at home and enjoy the ride instead.

Creating simple numbers with binary ones

So let’s start with numbers. Let’s say I tell the computer to add 9+9. How would I need to tell the computer this using just ones and zeros? Well, the binary system works in such a way that, using just two digits, we can represent any value. This is done by assigning value to the **position** of the digit, and not just the digit itself. So the first position in a binary number is worth 1, the second position is worth double that amount (2), the third is worth double the one before it (4) and so on. Does this sound confusing? Let me illustrate it with a nice little picture:

Looking at the picture, we start at the right and count our way to the left. The first position has a value of 1. The second position has a value of 2. The third position has a value of 4. And so on up to the eighth position with a value of 128. Now what the 1s and 0s do is decide whether or not the value of their position should be added to the grand total, so if we go from right to left once more and add only the values marked with ones, it might be something like:

- Should I add 1 to the total value? YES (1)

- Should I add 2 to the total value? YES (1)

- Should I add 4 to the total value? NO (0)

- Should I add 8 to the total value? YES (1)

- Should I add 16 to the total value? YES (1)

- Should I add 32 to the total value? NO (0)

- Should I add 64 to the total value? NO (0)

- Should I add 128 to the total value? YES (1)

This gives us a total value of 1 + 2 + 8 + 16 + 128 = 155.
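If you prefer code to bullet points, the walkthrough above can be sketched in a few lines of Python (the variable names are mine, not anything official):

```python
# The eight yes/no switches from the example, written left to right
# with the highest-value position first, just like a binary number.
bits = "10011011"

# Walk from right to left; each 1 adds its position's value to the
# total, and the position value doubles every step: 1, 2, 4, 8, ...
total = 0
for position, bit in enumerate(reversed(bits)):
    if bit == "1":
        total += 2 ** position

print(total)  # 155
```

Python can also do this conversion in one step with `int("10011011", 2)`, which gives the same 155.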

Now the key here is to understand that using the 8 yes/no switches illustrated above we can represent any number between 0 and 255. For example 3 is 00000011, which translates to:

- Should I add 1 to the total value? YES (1)

- Should I add 2 to the total value? YES (1)

- Should I add 4-128 to the total value? NO (0)

Total value: 1+2 = 3

So the 0s and 1s (commonly known as bits) simply represent these very basic instructions to our rather stupid computer, whose only real skill is adding numbers together if we tell it to. These bits are commonly grouped together in groups of 8 like above. 8 bits is what we call a **byte**. So when we say that a file or program is a certain size in megabytes, we’re actually talking about how many 1s and 0s it requires to represent something to the computer. For example, if I save an empty Microsoft Word document and look at its file size, it says 24 kilobytes, which roughly means that a simple, empty Word document needs **192 000** (24 * 1000 * 8) YES/NO instructions in order to be represented to our extremely stupid computer.
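As a quick sanity check, the Word-document arithmetic above looks like this in Python (the 24-kilobyte figure is just the example file size from the text, and I’m using the rounded 1000-bytes-per-kilobyte convention the example uses):

```python
# One byte is 8 yes/no switches; a kilobyte is (roughly) 1000 bytes.
BITS_PER_BYTE = 8
BYTES_PER_KILOBYTE = 1000

file_size_kilobytes = 24  # the empty Word document from the example

yes_no_instructions = file_size_kilobytes * BYTES_PER_KILOBYTE * BITS_PER_BYTE
print(yes_no_instructions)  # 192000
```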

From numbers to words

Representing numbers with other numbers is one thing, and it should seem at least slightly logical to do. But how do we represent letters and words using numbers? Well, it really isn’t all that different. It does, however, require a second conceptual step – the translation not only from binary numbers (YES/NO) to “normal” numbers (155), but also from these normal numbers to letters (A). This is essentially done using a cipher, just like the time when, as a kid, you agreed with your friends to shift all the letters in the alphabet one step to the right in order to communicate secretly with each other over written notes (i.e. “douchebag” became “epvdifcbh”). The difference here is, of course, that you don’t use one letter to represent another letter, but you use a number, so for example A could be represented by 1, B by 2, C by 3 and so on. That would mean that the word CAB could be written as 312. This is called an encoding, and examples of real encodings used on computers are ASCII and UTF-8. Encodings bridge the gap between letters and their numerical representation:

As you can see in my masterfully composed work of art above, the binary representation of “normal” numbers gets sent to the encoding, which then checks what letter to produce based on the number it is given. In this case it is given 312, and following the encoding I made up above, this produces the letters CAB, one letter per number. Different encodings require different numbers to produce characters. For example, if you want an encoding that is able to produce any (or most) characters known to man in all the different languages that exist all over the world, you will need a large set of numbers. If, on the other hand, you only want to produce A-Z plus numbers and a few punctuation characters, you need a whole lot fewer.

The principle for using binary data to represent something else on a computer is generally the same as above. If you want to represent an image, each pixel will have a numerical value representing the colour values. This numerical value will in turn have a binary value. The same general idea applies to representing sound – an encoding which interprets numbers in a certain way to represent pitch, volume and so on.
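To make the image example concrete, here’s a rough little Python sketch – the pixel value is an arbitrary example I picked, not taken from any particular image format:

```python
# A single pixel as three numbers (red, green, blue), each between
# 0 and 255 — so each colour channel fits in exactly one byte,
# i.e. eight yes/no switches.
pixel = (255, 128, 0)  # a shade of orange

for value in pixel:
    # format(value, "08b") shows the eight bits behind each number.
    print(format(value, "08b"))
```

Running this prints `11111111`, `10000000` and `00000000` – three bytes, one pixel.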

A short history (and future) lesson

Back in the day, the first computers were operated using punched cards, which look like this:

Here the punched/unpunched hole is a binary representation as well. Like I mentioned earlier, binary doesn’t need to be represented by the numbers 1 and 0. Rather, it’s simply something that is either on or off, yes or no. In the card above the punched holes are YES, the unpunched ones are NO. The layout of the card, then, is a way to transform the binary values into something else, in this case numbers by the look of it. Punched cards are now obsolete, but the fact is not much has really changed. The binary representation works exactly the same way – we’re still just telling a stupid machine to either DO or DO NOT. Today computers use electronic signals to represent DO and DO NOT, but in theory we could use anything. Fiber-optic cables use light to transmit the binary signals, and we also have radio waves and so on. The DVD player in your computer bounces laser beams off a plastic disc and detects whether or not there’s a microscopic groove in the area where the laser was shone. Some people are even using the lamps in their office to transmit wireless data through the faster-than-the-eye flickering of their light. There are even organic computers being developed, all on the basic principle of binary communication.

And that’s that.