Extinq – LINQ extensions

Every now and then I write some half-weird, irritatingly verbose LINQ code like:

// Chunk the source sequence into consecutive lists of groupSize elements
var result = source.Aggregate(new List<List<T>>(), (acc, i) =>
{
    if (!acc.Any() || acc.Last().Count == groupSize)
    {
        acc.Add(new List<T>());
    }
    acc.Last().Add(i);
    return acc;
});

And every time I do, I figure I should get a small GitHub/NuGet project going to gather up all these bits and pieces of semi-usefulness.

Well, the last few days I finally did this very thing. It’s not perfect yet, but I actually feel like it’s something I will install for a lot of my projects, which I guess is as much of a guarantee as any that at least I find it useful.

Feel free to read a more detailed description at GitHub: Extinq

Alternatively, just run Install-Package Extinq in a Package Manager Console near you.
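To give a flavour of the sort of thing that ends up in it, here is a minimal sketch of the chunking above wrapped up as an extension method. The name ChunkBy and the exact signature are purely illustrative choices of mine; see the GitHub page for what Extinq actually exposes.

using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // Hypothetical helper: splits a sequence into consecutive groups of groupSize elements.
    public static IEnumerable<List<T>> ChunkBy<T>(this IEnumerable<T> source, int groupSize)
    {
        var acc = new List<List<T>>();
        foreach (var item in source)
        {
            if (!acc.Any() || acc.Last().Count == groupSize)
            {
                acc.Add(new List<T>());
            }
            acc.Last().Add(item);
        }
        return acc;
    }
}

// Usage: Enumerable.Range(1, 7).ChunkBy(3) gives [1, 2, 3], [4, 5, 6], [7]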

Why > How

I would definitely say that in programming it’s more important to know why you are doing something than how you do it. Obviously this sort of dichotomy is pointless and impractical in reality, and of course you need to know how to actually do things in order to accomplish anything at all. So let’s instead say:

To become a better programmer, it’s more important to understand the why rather than the how.

Note that I’m not saying “to get work done” or “to get a job as a programmer”. For those two, the how can get you a long way. But to really level up and grok new things, you need to understand why things work and are implemented in a certain way.

I’ve gone through the same ordeal with a lot of different technologies and languages:

  1. Study it
  2. Understand how to use it
  3. Use it
  4. Get stuck on a tough issue
  5. Painstakingly solve the issue
  6. Repeat steps 4-5 a couple of times
  7. Understand why it works the way it does

I would argue that it’s not until step 7 that you not only know the why, but also grasp when the given technology is appropriate to use, which is just as important.

Visual Studio 2012 is great, but…

… after using it for a few months I still can’t help but feel that the UI design messes with my brain’s ability to pick out the different icons quickly. The twenty-shades-of-gray look is a bit too unclear for me compared to the VS2010 icons. I’ve never complained about the VS2012 changes and I never will (since it’s a great improvement tool-wise); this is just a fact for me personally. So I figured I’d just fix it up the way I wanted it to. It’s an easy, quick fix and I feel much better working in VS2012 now. The aim is to make it look more like VS2010.

1. Tools -> Extensions and Updates -> Search for “Visual Studio 2012 Color Theme Editor”. Install it.
2. A new menu appears, called “Theme”. Click on it and pick blue. This is the VS10 color scheme. Lovely.
3. Download Visual Studio Icon Patcher.
4. Unzip all its contents into a folder (let’s say C:/vsip)
5. If you have VS2010 still installed, just run vsip.exe from the directory, then run the extract command followed by the inject command. You now have the old icons. Yay! NB: If you want to be able to restore the VS2012 icons, make sure you run backup --version=2012 before injecting the new icons.
6. If you want the VS12 menus to not be in all caps, you can also run the menus command once.

If you don’t have Visual Studio 2010 installed anymore, the icon switching is a bit trickier. VSIP assumes you have both versions installed, but it can be run on a computer with only one of them installed in order to perform one half of the extract/inject operation:

1. On a computer with VS2010 installed, run vsip.exe extractmode from the command line
2. Run the extract command. This creates a folder named “Images” next to vsip.exe.
3. Copy the Images folder to the computer with VS2012 that you want to patch, put the folder in the vsip.exe folder (e.g. C:/vsip)
4. On the VS2012-computer, run vsip.exe injectmode from the command line, followed by the inject command.
5. All done.

Finally, it’s worth noting that not all icons will be changed with the above tool. For example, the “New Project” and “Add New Item” ones won’t. I mostly care about the Solution Explorer and IntelliSense icons, however, and those both get switched.

The pain of being *NIX at heart

“Real programmers use a magnetized needle and a steady hand.”

I thought I’d take some time to write down my thoughts regarding being a real programmer. It’s an interesting subject, and something I often think about. You should realise that this could have been a post ranting about Windows, OSX and software lockdown. Or the importance of open source. Or command line vs GUI. Or sexy Python scripts vs static leviathans. Or modularity vs “it just works”. Or text editor vs IDE. Or a million other relatively pointless arguments.

But it won’t be. I’ll spare you the pain. Instead I’ll just talk about myself for a while. Slightly sentimental rant ahead.

In my younger teens I got introduced to FreeBSD and Debian by a couple of slightly-older cool kids. I was blown away by the complexity. I was amazed at the things you could do. I was completely and utterly astonished at the power of these operating systems!

No, no, not really. I just wanted to be a Matrix-like hacker and if I did some poking around in config files I could make my computer look like this. But the fact that this was even possible did amaze me. For the first time ever I felt like the computer itself was actually cool. It wasn’t just a facilitator of fun stuff – it was the fun stuff. I realized that the computer could be programmed, that I could not only ask it to perform a predefined set of tasks, but also define these tasks myself. At first it was config files, BASH scripts and stuff like that, but this eventually changed to learning Python and the rest is, well, history.

But wait! There’s more! See, if you fast forward 12 years and join me in present-day Sweden, you can see that I almost exclusively do work using Microsoft .NET for everything from websites to third-party API interfacing, internal reporting apps and XML parsing. I run Windows both at home and at work.

There’s a whole lot of this:

“I am currently overviewing the customer’s waterfall specification for this Sharepoint project.”
And very little of this:

“Woah. I know Perl.”



How did this happen? How did I go from an open source-idealizing teenage Linux-user to an unhip .NET yuppie?

The simple truth is: I never really did. I’m still a Linux user at heart. I managed to introduce two Ubuntu servers at work after being there for a few months and they are still running today. They are my little pets. I play with them. They make me happy. I can’t imagine using IRC without screen and irssi. But I did grow up. I did learn other languages. I studied Java for a short while in school, then realized that really learning .NET would probably be the best bet for getting a job in this industry. And so I did. And so I got. And here I am, without regret.

Tools. They are tools. You are the craftsman. You are the artist. You are the creator. You are the engineer. You are what matters.

But that doesn’t mean that I don’t suffer. IIS. Remote Desktop. GUI configs in general. Kill me. Or just let me fire up PuTTY and go to my safe place, where no .exe files can hurt me. But it is how it is. I’m not going to spend time complaining. It’s just a fact: The *NIX way of doing things will always feel better to me.

Luckily, knowledge and experience aren’t tied to a specific platform. I spent my teenage years tinkering. Figuring out X. Running a LAMP stack. This has given me a certain way of approaching problems, and it has given me a slight disdain for how Microsoft does stuff at times. I would never use Ajax.BeginForm() unless I was forced to. Why would I? I can write my own damn HTML and JavaScript. Abstraction is good, if you know what you are abstracting away. Otherwise it’s just a shortcut that won’t make you any smarter. And one day you’ll be sitting in an interview only to realize that you have absolutely no idea how to figure out relative URLs in a web application because you have been using Url.Content() for the last three years.
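For context, this is roughly the kind of thing the helper quietly does for you; the controller name and paths below are made-up examples:

using System.Web.Mvc;

public class DemoController : Controller
{
    public ActionResult Index()
    {
        // Url.Content resolves the "~" application-root token against the site's
        // virtual root. If the app happens to be hosted under /myapp, cssPath
        // becomes "/myapp/Content/site.css"; that resolution is the knowledge
        // the helper hides from you.
        string cssPath = Url.Content("~/Content/site.css");
        return Content(cssPath);
    }
}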

I’m not saying any of this is actually bad. I use these shortcuts myself, but I also feel generalized knowledge slipping between my fingers when I do. I don’t want to be a “.NET developer on the ASP.NET MVC platform”. I want to be a real programmer, god damnit. Programming has nothing to do with frameworks – and I don’t mean that in a luddite way. Frameworks are great, and I’m not going to propose you build your own web framework in .NET just to be a real programmer. Just don’t settle in for good. Don’t accept the status quo. Try new stuff. If you love Windows, try Linux. If you love Linux, try Windows. If you love C#, try Python. If you love Python, try C. If you love object oriented programming, try Haskell. I guarantee you will become a better programmer by trying it out. Don’t get complacent! If you do, you know what will happen:

“Peace has cost you your strength! Victory has defeated you!”

Relative graphics, immersion and gameplay

Shiny and realistic-looking graphics are great and can really add to the experience and immersion of a game. However, one shouldn’t assume that good graphics always means shiny and realistic-looking. In fact, I would say that graphics need to be good at what they do, i.e. be designed for the task at hand, rather than optimized for looks by default.

This is Minecraft:

Minecraft

This is Wurm Online:

Wurm Online

They serve as a good reference point for my post for many reasons: they are both considered somewhat indie, and Minecraft is solely developed by Markus “Notch” Persson, who was also a founder of and long-time coder on Wurm Online. They are both written in Java, aimed at an online audience and share similarities in world-shaping abilities, construction, resource gathering etc.

There are two things they are vastly different in, however: level of success and style of graphics. Minecraft’s world is built from square blocks of bright pixels. Wurm Online has that staple “tried and failed” 3D look, even when running on the highest settings possible. Minecraft is also hugely successful, netting Markus Persson not only millions upon millions but also a chance to found a game development studio where his now quite evident talent can hopefully be put to great use. Wurm Online, on the other hand, has been out for years upon years but has never really made it past its niche crowd of sandbox gamers.

I will not argue that the success of either game is based solely on their style of graphics, but I will argue that despite Minecraft’s simplistic approach to graphics, it still looks better than Wurm Online’s attempt at decent 3D, and probably took a whole lot less time to create (not to mention run on a modern-day processor). This I like to think is based on the fact that simple graphics that get the job done are better for immersion and gameplay than advanced graphics that aren’t quite there. Lack of anti-aliasing isn’t a problem in a world of cubes, so to speak.

Using the right style of graphics for a certain job doesn’t only relate to the player experience. It’s also something a developer should be interested in for their own gain. Let’s say Markus Persson had an idea for a game that he really wanted to create, where the focus was the gathering of resources and the use of those resources to construct things. While my experience with game development is fairly limited, I know enough about games to know that creating 3D models, textures, physics and so on for a game is not something to be done over a weekend. So what if the parts of the trees that aren’t cut down will float in the air? It will only be a big issue if you attempt to give the player the experience of being in a completely realistic world to begin with. I would say that the breaking of immersion isn’t mainly constituted by the “this isn’t something that could happen in real life” experience, but rather by the “this isn’t something that should happen in this game” realisation. Indeed; the problem isn’t reality, but rather the failure to deliver the experience you have implicitly told the player they would have while playing.

When you read a book, the story is key. This story is crafted with words. When you play a game, the key is gameplay. The term is rather vague, and is more or less the equivalent of the term “good” in the field of ethics, i.e. that one word that rests at the base of all other values, yet is itself heavily contested territory. I think the problem with the term gameplay is that people have tried to define it as one thing; e.g. “that feeling you get when you overcome a challenge” or “the joy of playing”. However, if we look at the book analogy once more, we can see that its key to success, the concept of a story, is just as vague and ever-changing. Try defining what a good story is and you quickly realise that it all depends on the writer, the reader and the experience that the book tries to deliver. A quirky epos is probably bad, while a jovial rendition of a tragic story can be hard to accept as a reader. Mere examples, of course, but the point stands: different stories want to accomplish different things, just as different games want to deliver different gameplay. Trying to find a common denominator between the action-packed and dynamic gameplay of a game like Crysis and the time-bending and heavily story-based gameplay of the much-deserved indie hit Braid is pointless at best, and runs the risk of creating a heavily reductionistic view of games in general at worst.

So let’s get back to our case in point, the graphically challenged lovechild of indie gaming and Farmville; Minecraft. It doesn’t strike me as the type of game that’s aiming to deliver tightly-packed action. Nor does it want you to gape at the amazing sunrise over a field of freshly cut wheat while you practice your spellchanting in the crisp morning air (to be honest, some of the creations in Minecraft are so amazing, I’d be surprised if this very setup doesn’t actually exist… but I hope you can see my point despite this). Minecraft aims to sate the creative (and megalomanic) streak that most humans seem to harbor. It’s more akin to Populous and Theme Park than World of Warcraft. Given enough time, I’m pretty convinced some Minecraft players will raise the level of abstraction and construct advanced games within the game itself.

But this all seems to beg the question: would Minecraft be an even better game if it had all of the things above and stunning visuals, without any of the added demands on the player’s computer? To be honest I’m really not sure, but the question is also somewhat moot: the higher you set your minimum graphical bar, the more people you exclude from playing your game. Until voxel graphics are big I really can’t see Minecraft needing to change graphical direction for any good reason. Wurm Online, on the other hand, will probably truck on, in essence hampered by its too highly set bar in the graphics department. A sprite-based game like (classic) Ultima Online delivers infinitely more immersion in my eyes.

Binary

As I’m sure most people are aware, computers talk in binary. That is, they only really use 1s and 0s to communicate, calculate and let you read the newspaper. Binary is a very primitive and silly way to count, but computers are rather stupid, and only really understand two words: yes and no, which makes binary a good way to communicate with them. Now you might think I’m over-simplifying this just to make a point, but it really is this simple. Computers only communicate with YES and NO. On and off. 1 and 0.

That’s all fine and dandy, you say, but it really doesn’t explain how I am able to play absurdly good-looking games over the Internet! No, it doesn’t. But it’s a good start. So let’s not move on too fast. Like I just mentioned, 1 and 0 can be seen as yes and no. But making up big numbers, images and so on out of just yes and no doesn’t really seem plausible. Which it isn’t. But it also is.

This is an attempt to explain binary in a relatively simple and relaxed way. I know some of you will cringe at the simplifications, but please leave your anal tendencies at home and enjoy the ride instead.

Creating simple numbers with binary ones
So let’s start with numbers. Let’s say I tell the computer to add 9+9. How would I need to tell the computer this using just ones and zeros? Well, the binary system works in such a way that using just these two digits we can represent any value. This is done by assigning value to the position of the digit, and not just the digit itself. So that means that the first position in a binary number is worth 1, the second position is worth double that amount (2), the third is worth double the amount before that (4) and so on. Does this sound confusing? Let me illustrate it with a nice little picture:

Binary digits and their values

Looking at the picture, we start at the right and count our way to the left. The first position has a value of 1. The second position has a value of 2. The third position has a value of 4. And so on, up to the eighth position with a value of 128. Now, what the 1s and 0s do is decide whether or not the value of their position should be added to the grand total, so if we go from right to left once more and add only the values of the ones, it might be something like:

- Should I add 1 to the total value? YES (1)
- Should I add 2 to the total value? YES (1)
- Should I add 4 to the total value? NO (0)
- Should I add 8 to the total value? YES (1)
- Should I add 16 to the total value? YES (1)
- Should I add 32 to the total value? NO (0)
- Should I add 64 to the total value? NO (0)
- Should I add 128 to the total value? YES (1)

This gives us a total value of 1 + 2 + 8 + 16 + 128 = 155.
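If you prefer code to prose, here is a small sketch of the same sum in C#; the bit pattern is just the example from the list above, read right to left:

using System;

// The eight positions and their values, rightmost position first
int[] positionValues = { 1, 2, 4, 8, 16, 32, 64, 128 };

// The yes/no answers from the list above (1 = YES, 0 = NO), rightmost bit first
int[] bits = { 1, 1, 0, 1, 1, 0, 0, 1 };

int total = 0;
for (int i = 0; i < bits.Length; i++)
{
    if (bits[i] == 1)   // "Should I add this position's value? YES"
    {
        total += positionValues[i];
    }
}

Console.WriteLine(total);   // 1 + 2 + 8 + 16 + 128 = 155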

Now the key here is to understand that using the 8 yes/no switches illustrated above we can represent any number between 0 and 255. For example 3 is 00000011, which translates to:

- Should I add 1 to the total value? YES (1)
- Should I add 2 to the total value? YES (1)
- Should I add 4-128 to the total value? NO (0)
Total value: 1+2 = 3

So the 0s and 1s (commonly known as bits) simply represent this type of very basic instruction to our rather stupid computer, whose only real skill is adding numbers together if we tell it to. These bits are commonly grouped together in groups of 8 like above. 8 bits is what we call a byte. So when we say that a file or program is a certain size in megabytes, we’re actually talking about how many 1s and 0s it requires to represent something to the computer. For example, if I save an empty Microsoft Word document and look at its file size, it says 24 kilobytes, which roughly means that a simple, empty Word document needs 192 000 (24 * 1000 * 8) YES/NO instructions in order to be represented to our extremely stupid computer.
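If you want to let the computer do the translating for you, the .NET Convert class can round-trip between the two notations; the 155 is just the example from earlier:

using System;

// 155 written out as 8 bits, and then turned back into a number
string bits = Convert.ToString(155, 2).PadLeft(8, '0');   // "10011011"
byte value = Convert.ToByte(bits, 2);                      // 155

Console.WriteLine(bits + " = " + value);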

From numbers to words
Representing numbers with other numbers is one thing, and it should seem to be an at least slightly logical thing to do. But how do we represent letters and words using numbers? Well, it really isn’t all that different. It does however require a second conceptual step – the translation not only from binary numbers (YES/NO) to “normal” numbers (155), but also from these normal numbers to letters (A). This is essentially done using a cipher, just like the time you as a kid agreed with your friends to shift all the letters in the alphabet one step to the right in order to communicate secretly with each other over written notes (i.e. “douchebag” became “epvdifcbh”). The difference here is, of course, that you don’t use one letter to represent another letter, but you use a number, so for example A could be represented by 1, B by 2, C by 3 and so on. That would mean that the word CAB could be written as 312. This is called an encoding, and examples of real encodings used on computers are ASCII and UTF-8. Encodings bridge the gap between letters and their numerical representation:

Encoding

As you can see in my masterfully composed work of art above, the binary representation of the “normal” numbers gets sent to the encoding, which then checks what letter to produce based on the number it is given. In this case it is given 312, and following the encoding I made up above, this produces the letters CAB, one letter per number. Different encodings require different numbers to produce characters. For example, if you want an encoding that is able to produce any (or most) characters known to man in all the different languages that exist all over the world, you will need a large set of different numbers. If you, on the other hand, only want to produce A-Z plus numbers and a few punctuation characters, you need a whole lot fewer.
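To see a real encoding doing the same job, here is a small sketch using .NET’s built-in ASCII encoding, where ‘C’, ‘A’ and ‘B’ happen to map to 67, 65 and 66:

using System;
using System.Text;

// Translate the letters to their ASCII numbers
byte[] numbers = Encoding.ASCII.GetBytes("CAB");   // { 67, 65, 66 }

foreach (byte n in numbers)
{
    // ...and print each number's binary form, e.g. 'C' -> 67 -> 01000011
    Console.WriteLine(Convert.ToString(n, 2).PadLeft(8, '0'));
}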

The principle for using binary data to represent something else on a computer is generally the same as above. If you want to represent an image, each pixel will have a numerical value representing the colour values. This numerical value will in turn have a binary value. The same general idea applies to representing sound – an encoding which interprets numbers in a certain way to represent pitch, volume and so on.
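As a tiny example, a single pixel in a common 24-bit image format is just three of those bytes, one per colour channel:

using System;

// One byte per colour channel, each 0-255; this particular pixel is pure red
byte red = 255, green = 0, blue = 0;

// Stored as bits this is simply 11111111 00000000 00000000
Console.WriteLine(Convert.ToString(red, 2).PadLeft(8, '0') + " " +
                  Convert.ToString(green, 2).PadLeft(8, '0') + " " +
                  Convert.ToString(blue, 2).PadLeft(8, '0'));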

A short history (and future) lesson
Back in the day, the first computers were operated using punched cards, which look like this:

Punched card

Here the punched/unpunched hole is a binary representation as well. Like I mentioned earlier, binary doesn’t need to be represented by the numbers 1 and 0. Rather, it’s simply something that is either on or off, yes or no. In the card above the punched holes are YES, the unpunched ones are NO. The layout of the card, then, is a way to transform the binary values into something else, in this case numbers by the look of it. Punched cards are now obsolete, but the fact is not much has really changed. The binary representation works exactly the same way – we’re still just telling a stupid machine to either DO or DO NOT. Today computers use electronic signals to represent DO and DO NOT, but in theory we could use anything. Fiber-optic cables use light to transmit the binary signals, and we also have radio waves and so on. The DVD player in your computer bounces laser beams off a plastic disc and detects whether or not there’s a microscopic groove in the area where the laser was shone. There are even people who are now using the lamps in their office to transmit wireless data through the faster-than-the-eye flickering of their lights. There are even organic computers being developed, all on the basic principle of binary communication.

And that’s that.