Erik McClure

Software Engineering Is Bad, But That's Not Why


I’ve been writing code for over 12 years, and for a while I’ve been disgusted by the sorry state of programming. That’s why I felt a deep kinship with Nikita Prokopov’s article, Software Disenchantment. It captures the intense feeling of frustration I have with the software industry as a whole and the inability of modern programmers to write anything even remotely resembling efficient systems.

Unfortunately, much of it is wrong.

While I wholeheartedly agree with what Nikita is saying, I am afraid he doesn't understand the details behind modern computers. This is unfortunate, because a misinformed article like this weakens both our positions and makes it more difficult to convince people that software bloat is a problem. Most of the time, his frustrations are valid, but his reasoning is misdirected. So, in this article, I'm going to write counterpoints to some of the more problematic claims.

1. Smooth Scroll

One of my hobbies is game development, and I can assure you that doing anything at 4K resolution at 60 FPS on a laptop is insanely hard. Most games struggle to render at 4K 60 FPS even with powerful GPUs, and 2D games are usually graphically simplistic: they can have lots of fancy drawings, but drawing 200 fancy images on the screen with hardly any blending is not very difficult. A video is just a single 4K image rendered 60 times a second (plus interpolation), which is trivial. Web renderers can't do that, because HTML has extremely specific composition rules that will break a naïve graphics pipeline. There is also another crucial difference between a 4K video, a video game, and a webpage: text.

High quality anti-aliased and sometimes sub-pixel hinted text at 10 different sizes on different colored backgrounds blended with different transparencies is just really goddamn hard to render. Games don’t do any of that. They have simple UIs with one or two fonts that are often either pre-rendered or use signed distance fields to approximate them. A web browser is rendering arbitrary unicode text that could include emojis and god knows what else. Sometimes it’s even doing transformation operations in realtime on an SVG vector image. This is hard, and getting it to run on a GPU is even harder. One of the most impressive pieces of modern web technology is Firefox’s WebRender, which actually pushes the entire composition pipeline to the GPU, allowing it to serve most webpages at a smooth 60 FPS. This is basically as fast as you can possibly get, which is why this is a particularly strange complaint.
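
To make the contrast concrete, here's a toy sketch of the signed distance field trick games use, where each glyph pixel costs one texture sample and a smoothstep. This is my own illustration, not any particular engine's shader:

```python
import numpy as np

def sdf_coverage(distance, edge=0.5, softness=0.05):
    # 'distance' holds sampled SDF values in [0, 1], where 0.5 is the glyph edge.
    # One clamp plus one smoothstep gives antialiased per-pixel coverage.
    t = np.clip((distance - (edge - softness)) / (2 * softness), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

# Two pixels: one just inside the glyph edge, one just outside.
print(sdf_coverage(np.array([0.53, 0.47])))  # -> roughly [0.90, 0.10]
```

That's the entire per-pixel cost. A browser doing hinted, sub-pixel, arbitrarily transformed unicode on arbitrary backgrounds gets none of these shortcuts.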

I think the real issue here, and perhaps what Nikita was getting at, is that the design of modern webpages is so bloated that the web browsers can’t keep up. They’re getting inundated with <div> trees the size of Mount Everest and 10 megs worth of useless javascript bootstrapping ad campaigns that load entire miniature videos. However, none of this has anything to do with the resolution or refresh rate of your screen. Useless crap is going to be slow no matter what your GPU is. Inbox taking 13 seconds to load anything is completely unacceptable, but animating anything other than a white box in HTML is far more expensive than you think, and totally unrelated.

2. Latency

Latency is one of the least understood values in computer science. It is true that many text editors have abysmal response times caused by terrible code, but it’s a lot easier to screw this up than people realize. While CPUs have gotten faster, the latency between hardware components hasn’t improved at all, and in many cases cannot possibly improve. This is because latency is dominated by physical separation and connective material. The speed of light hasn’t changed in the past 48 years, so why would the minimum latency?

The problem is that you can’t put anything on top of a system without increasing its latency, and you can’t decrease the latency unless you bypass a system. That 42-year-old emacs system was basically operating at its theoretical maximum because there was barely anything between the terminal and the keyboard. It is simply physically impossible to make that system more responsive, no matter how fast the CPU gets. Saying it’s surprising that a modern text editor is somehow slower than a system operating at the minimum possible latency makes absolutely no sense, because the more things you put between the keyboard and the screen, the higher your latency will be. This has literally nothing to do with how fast your GPU is. Nothing.

It’s actually much worse, because old computers didn’t have to worry about silly things like composition. They’d do v-sync themselves, manually, drawing the cursor or text in between vertical blanks of the monitor. Modern graphics draw to a separate buffer, which is then flipped to the screen on its next refresh. The consequence, however, is that any input that arrives after you start drawing a frame has to wait for the next one! You can only start drawing after you’ve processed all the user input, so once you start, it’s game over. This means that if a vertical blank happens every 16.6 ms, and you start drawing at the beginning of that frame, you have to wait 16.6 ms to process the user input, then draw the next frame and wait another 16.6 ms for the new buffer to get flipped to the screen!

That’s 33 ms of latency right there, and that’s if you don’t screw anything up. A single badly handled async call could easily introduce another frame of lag. As modern hardware connections get more complex, they introduce more latency. Wireless systems introduce even more. Hardware abstraction layers, badly written drivers, and even the motherboard BIOS can all negatively impact latency, and we haven’t even gotten to the application yet. Again, the only way to lower latency is to bypass layers that add it. At best, perfectly written software would add negligible latency and approach the latency of your emacs terminal, but it could never surpass it (unless we start using graphene).
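
To put rough numbers on it, here's a back-of-the-envelope model of that double-buffered pipeline. The stage costs are illustrative guesses on my part, not measurements of any real system:

```python
FRAME = 1000 / 60  # ~16.7 ms between vertical blanks at 60 Hz

# Worst case: input arrives just after this frame's drawing has started.
missed_frame = FRAME  # input waits out the frame already being drawn
draw_frame   = FRAME  # input is processed and drawn here, flipped at the next blank
floor = missed_frame + draw_frame
print(f"display floor: {floor:.1f} ms")  # ~33.3 ms before your code even runs

# Every layer in between stacks on top of that floor (numbers made up):
extras = {"USB polling": 4.0, "compositor queue": 8.0, "mishandled async call": FRAME}
print(f"with typical extras: {floor + sum(extras.values()):.1f} ms")
```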

We should all be pushing for low-latency systems, and Electron apps are among the worst offenders, but this is something everyone, from the hardware to the OS to the libraries, has to cooperate on if we want responsive computers.

3. Features

It seems silly to argue that computers today have no new features. Of course they have new features. A lot of them are features I don’t use, but new features do arrive and occasionally they are actually nice. I think the real problem here is that each new feature, for some inexplicable reason, requires exponentially more resources than the one before it. Other times, basic features that are trivial to implement are simply left out, for no apparent reason.

For example, Discord still doesn’t know how to de-duplicate resent client messages over a spotty connection despite this being a solved problem for decades, and if a message is deleted too quickly, the deletion packet is received before the message itself, and the client just… never deletes it. This could be trivially solved with a tombstone or even just a temporary queue of unmatched deletion messages, yet the client instead creates a ghost message that you can’t get rid of until you restart the client. There is absolutely no reason for this feature to not exist. It’s not even bloat, it’s just ridiculous.
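
The fix really is that trivial. Here's a minimal sketch of the tombstone approach, my own design rather than anything Discord actually runs:

```python
class MessageStore:
    def __init__(self):
        self.messages = {}       # id -> text
        self.tombstones = set()  # ids deleted before the message ever arrived

    def on_create(self, msg_id, text):
        if msg_id in self.tombstones:
            self.tombstones.discard(msg_id)  # the delete already happened; drop it
            return
        self.messages[msg_id] = text  # keying by id also de-duplicates resent messages

    def on_delete(self, msg_id):
        if msg_id in self.messages:
            del self.messages[msg_id]
        else:
            self.tombstones.add(msg_id)  # remember a delete we haven't matched yet

store = MessageStore()
store.on_delete("42")          # deletion packet arrives first
store.on_create("42", "oops")  # message arrives late...
assert "42" not in store.messages  # ...and no ghost message is created
```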

However, a few other comparisons in here really don’t make any sense. For example, an installation of Windows 10 is only 4 GB because of extreme amounts of compression, yet the article compares this with a 6 GB uncompressed basic install of Android. Windows 10 actually takes up 16 GB once installed (20 GB for 64-bit). While software bloat is a very real problem, these kinds of comparisons are just nonsense.

4. Compilers

Now, this one I really don’t understand. Any language other than C++ or Rust basically compiles instantly until you hit 100k lines of code. At work, we have a C# monstrosity that’s half a million lines of code and compiles in 20 seconds. That’s pretty fast. Most other languages are JIT-compiled, so you can just run them instantly. Even then, you don’t really want to optimize for compile time in Release mode unless you’re just removing unnecessary bloat, and many modern compilers take a long time to compile things because they’re doing ridiculously complex optimizations that may require solving NP-hard problems, which some of them actually attempt.

The original C language and Jonathan Blow’s language compile really fast because they don’t do anything. They don’t help you, they have an incredibly basic type system, and they don’t do advanced memory analysis or a bunch of absurd optimizations to take advantage of the labyrinthine x86-64 instruction set. Languages in the 1990s compiled instantly because they had no features. Heck, sometimes compilation is actually disk-bound, which is why getting an SSD can dramatically improve compile times for large projects. That has nothing to do with the compiler!

My question here is: what on earth are you compiling that takes hours to compile?! The only projects I’m aware of that take this long are all C++ projects, and it’s always because of header files, which are terrible; thankfully, no other language ever made that mistake again. I am admittedly disappointed in Rust’s compilation times, but most other languages try to ensure that at least debug compilation is ridiculously fast.

I think the complaints here are mostly due to bloated javascript ecosystems that pile NPM modules on top of each other until even a simple linter takes forever to run, or when coders write their entire program in a completely different language that transpiles to javascript and then minify the javascript and that’s if you don’t put it through Babel to polyfill back to earlier versions of javascript and… this sure seems like a javascript problem to me, not a general issue with modern compilers.

Hopefully, webassembly will eliminate this problem, at least on the web. As for everywhere else, unless you’re using a complex systems programming language, compilation times usually aren’t that bad, and even when they are, they exist for a reason.

Why would you complain about memory usage and then in the next breath complain about compilation times? Language features are not free. Rust’s lifetime analysis is incredibly difficult to do, but it frees the programmer from worrying about memory errors without falling back on a stop-the-world garbage collector, which matters when garbage collection can use 2-3 times more memory than the program actually needs (depending on how much performance you want).

Efficient code, fast compilation and memory safety. Pick two. If you feel that compilers can simply magically improve everything, you’ve probably been living in javascript land for too long, where everything is horrible all the time for no reason. The rest of computing is not actually that horrible. The real problem is that most things are now being written in javascript, so everyone inherits all of javascript’s problems. If you don’t want javascript’s problems, stop using it.

Conclusion

I care very deeply about the quality of code that our industry is putting out, and I love the Better World Manifesto that Nikita has proposed. However, it is painful for me to read an article criticizing bad engineering that gets the technical details wrong. A good engineer makes sure he understands what he’s doing; that’s the whole point of the article! If we truly want to be better engineers, we need to understand the problems we face and what we can do to fix them. We need to understand how our computers work and the fundamental algorithmic trade-offs we make when we compare problem-solving approaches.

Years ago, I wrote my own article on this, and I feel it is still relevant today. I asked if anyone actually wants good software. At least now, I know some people do. Maybe together, we can do something about it.


Why Do People Use The Wrong Email?


Ever since 2013, I’ve been getting registration e-mails in foreign languages from sites I definitely did not sign up for.

It started with Instagram, on which a bizarrely determined young boy from somewhere around Denmark was trying to register using my e-mail address. Instagram lets you remove an e-mail from an account, which is what I did, repeatedly, but the kid kept adding the non-functional e-mail back on to the account. Eventually I forced a password reset and forcibly deleted his account, in an attempt to dissuade him from using someone else’s e-mail in the future. Astonishingly, this did not work, and I was forced to register on Instagram just to prevent my e-mail from being used.

He shared a first name with me, and I noticed his name on a few of the other e-mails I had gotten. At first, I thought it was just this one kid, possibly related to the infamous Gmail dots issue, but astoundingly, most of the time the e-mail had no dots and no apparent typos; it was just… my e-mail. Then I started getting even weirder e-mails.

  • Someone else near Denmark used my e-mail to open an Apple ID. When I went in to disable the account, I found it included payment information and their home address, along with the ability to remotely disable their Apple device.
  • I once got a Domino’s order receipt from someone in Rhode Island, which included their full name, home address, and phone number.
  • Just recently, someone signed up for Netflix and had the account temporarily suspended for lack of payment, then added a payment option before I decided to go in and change the e-mail, signing up for Netflix myself so I wouldn’t have to deal with it anymore. I could see part of the credit card they had used.
  • Another time, I woke up to someone in a European timezone creating an account on Animoto and then uploading 3 videos to it before I could reset the password and lock out the account.
  • At least two sites included a plaintext password in the e-mail, although they didn’t seem very legitimate in the first place.

What’s really frightening is discovering just how fragile many of these websites are. Most of them that allow you to change your e-mail address don’t require the new e-mail to be verified, allowing me to simply change it to random nonsense and render the account permanently inaccessible. Others allow your account to function without any sort of e-mail verification whatsoever.
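
For the record, the missing safeguard is about ten lines of code. A minimal sketch, with every name my own invention:

```python
import secrets

pending = {}  # token -> (account_id, new_email)

def request_email_change(account_id, new_email, send_mail):
    token = secrets.token_urlsafe(16)
    pending[token] = (account_id, new_email)
    send_mail(new_email, f"Confirm your new address: /confirm?token={token}")
    # crucially, the account's e-mail has NOT changed yet

def confirm_email_change(token, accounts):
    account_id, new_email = pending.pop(token)  # unknown token raises, so it's rejected
    accounts[account_id]["email"] = new_email   # only now does the change land
```

Until the new address proves it can receive mail, the account keeps its old one, and a stranger can't render it inaccessible with random nonsense.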

One of my theories was that people just assumed they were picking a username that happened to have @gmail.com on the end of it. My e-mail is my first name and a number, which probably isn’t hard for someone also named Erik to accidentally choose. However, some of these e-mails are for people clearly not named Erik, so where is the e-mail coming from? Why use it?

So far, I’ve had my e-mail used incorrectly to sign up for these services:

  • Netflix (Spanish) - Cely A.
  • PlayView (Spanish)
  • Mojang (English)
  • Apple ID (Danish) - Seier Madsen
  • Telekom Fon (Hungarian)
  • Nutaku (English) - Wyled1
  • Samsung (Spanish)
  • Forex Club (Russian) - Eric
  • Marvel Contest of Champions (Portuguese)
  • Jófogás (Hungarian)
  • Wargaming.net (Russian)
  • Deezer (English) - Erik Morales
  • Crossfire (Portuguese)
  • Instagram (Danish) - Erikhartsfield
  • List.am (Armenian)
  • ROBLOX (English) - PurpleErik18
  • cccraft.net (Hungarian)
  • ThesimpleClub (German)
  • Cadastro Dabam (Portuguese)
  • Első Találkozás (Hungarian) - Rosinec
  • Pinterest (Portuguese) - Erik
  • MEGA (Spanish)
  • mestermc.hu (Hungarian) - Rosivagyok
  • Snapchat (English)
  • Skype (Swedish)
  • PlayIT (Hungarian) - newsletter
  • Animoto (English) - Erik
  • Geometry Dash (English) - erikivan1235
  • Club Penguin (Spanish)
  • LEGO ID (English) - szar3000
  • Seejaykay.com (English)
  • Dragon’s Prophet (English)
  • Sweepstakes (English) - ErikHartsfield
  • School.of.Nursing (English) - ErikHartsfield
  • SendEarnings (English) - ErikHartsfield
  • Talkatone (English) - Cortez
  • Anonymous VPN (English)
  • Penge (Hungarian)
  • Apple ID (Swedish) - Erik
  • Snapchat (Swedish) - kirenzo
  • Snapchat (Swedish) - erik20039
  • ROBLOX (English) - Mattias10036
  • Riot Games (English) - epik991122
  • Instagram (English) - opgerikdontcare
  • Goodgame Empire (English) - rulererikman

Given how fundamental e-mail is to our modern society, it’s disconcerting that some people, especially young kids, have no idea how powerful an e-mail address is. When they provide the wrong e-mail for a service, they are handing over the master keys to their account. These services use e-mail as a primary source of identification, and some of these people don’t even seem to realize they’re using the wrong e-mail.

Perhaps this speaks to the fact that, despite all the work large software corporations claim they put into making intuitive user interfaces, basic aspects of our digital world are still arcane and confusing to some people. Forget trying to replace passwords with biometrics; some people don’t even understand how e-mail works. Maybe the software industry needs to find a more intuitive way to assert someone’s identity.

Or maybe people are just dumb.


Software Optimizes to Single Points of Failure


Whenever people talk about removing single points of failure, most of the suggestions involve “distributed systems” that are resilient to hardware failures. For software, we’ve invented code signing and smart contracts via blockchain to ensure the code we’re running is what we expected to run.

But none of these technologies can prevent a bug from taking down the entire system.

A lot of people point to Google as a single point of failure. They are only partially correct, because Google’s hardware is distributed and extremely redundant. No single hardware failure in a Google data center can take down the entire data center. You could probably nuke an entire data center and most Google services would fall back to another one. In fact, Google has developed software canaries to catch bugs before they propagate too far into production, in an attempt to address the problem of their software being a single point of failure.

But something did take down the entirety of Google Compute Engine once. It was a software bug in the canary itself. Of course, all the canaries were running the same software, so all of them had the same bug, and none of them could catch the bad configuration that was being propagated to all of Google’s routers.

By creating a software canary, Google had simply shifted the single point of failure to its canary software. It was much harder for it to fail, but it was still a single point of failure, so when it did fail, it took down the entire system.
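
The pattern, boiled down to a sketch (simplified far beyond whatever Google actually runs), makes the problem obvious: the validation step is itself one piece of software.

```python
class Router:
    def __init__(self):
        self.config = None
    def apply(self, config):
        self.config = config
    def rollback(self):
        self.config = None

def deploy(config, routers, validate):
    canary = routers[:1]                  # push to a small slice first
    for r in canary:
        r.apply(config)
    if not validate(canary):              # the single point of failure lives here
        for r in canary:
            r.rollback()
        raise RuntimeError("canary rejected config")
    for r in routers[1:]:                 # canary passed, so roll out everywhere
        r.apply(config)

# If the validator shares a bug with every canary, a bad config sails through:
buggy_validate = lambda canary: True      # the bug: it accepts anything
deploy({"routes": []}, [Router() for _ in range(100)], buggy_validate)
```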

We’ve put a lot of work into trying to reduce the number of bugs in mission-critical systems, going so far as to try to create provably correct software. The problem is that no system can prove that it is free of design flaws, which occur when the software operates correctly but does something nobody actually wanted it to do. All our code-signing and trusted computing initiatives do is make it very difficult for someone to sneak bad code into a widely used library. None of them, however, remove the single point of failure. Should the NSA ever succeed in sneaking a backdoor into a widely used open source library, it will propagate to everything.

A very well guarded single point of failure is still a single point of failure, no matter how remote the chances of it actually failing. Tom Scott has an excellent video about how a trusted engineer at Google who is allowed to bypass all their security checks could go rogue and remove all the password checks on everything, and it would be incredibly hard to stop them.

Physical infrastructure is much more resilient to these kinds of problems, because even if every piece of infrastructure has the same problem, you still have to physically get to it in order to exploit the problem. This makes it very hard for anyone to simultaneously sabotage any country’s offline infrastructure without an incredible amount of work. Software, however, lets us access everything from everywhere. The internet removes physical access as a last resort.

Of course, this is not an insurmountable problem, but it is deceptively difficult to overcome. For example, let’s say we have a bunch of drones we’re controlling. To prevent one bug from taking all of them out at once, half of them run one flying program and the other half runs a completely different flying program, developed independently. Unfortunately, both of these programs rely on the same library that reads the gyroscope data. If that library has a bug, the entire swarm will crash into a mountain. Having the drones calculate the data for each other and compare results doesn’t help, because everyone gets the wrong result. The software logic itself is wrong.
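
A sketch of that failure mode, with every name invented for illustration: two controllers written by independent teams, one shared dependency underneath them both.

```python
def read_gyro():
    return -9.8          # the shared library's bug: the sign is flipped on every drone

def controller_a(gyro):  # team A's flying program
    return "pitch up" if gyro() < 0 else "hold"

def controller_b(gyro):  # team B's, developed completely independently
    return "hold" if gyro() >= 0 else "pitch up"

# Voting between the two versions doesn't help: they agree on the wrong answer,
# because the fault sits below the layer where the implementations diverge.
assert controller_a(read_gyro) == controller_b(read_gyro) == "pitch up"
```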

The reason this is so insidious is that it runs counter to sane software development practices. To minimize bugs, we minimize complexity, which means writing the least amount of code possible, which inadvertently optimizes toward a single point of failure. We re-use libraries and share code. We deliberately try to solve problems exactly once and re-use that code everywhere in our program. This is a good thing, because it means any bug we fix propagates everywhere else, but it comes at the cost of propagating any bug we introduce.

Soon, our world will be consumed by automation, one way or another. Cory Doctorow suggests that hardware should only run software the user trusts, but what if I end up trusting buggy software? If all our self-driving cars run the same software, what happens when it has a bug? Even worse, what if all the different self-driving car companies have their own software, custom built by highly paid engineers… that all use OpenSSL to securely download updates?

What if OpenSSL has a bug?

It’s not clear what can be done about this. Obviously we shouldn’t go around introducing unnecessary complexity that creates even more bugs, but at the same time we shouldn’t delude ourselves into thinking our distributed systems have no single point of failure. They may be robust to hardware failures, but the software they run on will continue to be a single point of failure for the foreseeable future.


Migrating To A Static Blog


I’ve finished constructing a new personal website for myself using Hugo, and I’m moving my blog over there so I have more control over what gets loaded, and more importantly, so the page doesn’t attempt to load Blogger’s 5 MB worth of bloated javascript nonsense just to read some text. It also fixes math and code highlighting when reading on mobile. If you reached this post through Blogger, you’ll be redirected (or soon will be) to the corresponding post on my new website.

All comments have been preserved from the original posts, but making new comments is currently disabled - I haven’t decided if I want to use Disqus or attempt something else. An RSS feed is available at the bottom of the page for tracking new posts; it should mimic the Blogger RSS feed, if you were using that. If something doesn’t work, poke me on Twitter and I’ll try to fix it.

I implemented the share buttons as simple links, without embedding any crazy javascript bullshit. In fact, the only external resource loaded is a Google tracking ID for pageviews. Cloudflare is used to enforce an HTTPS connection over the custom domain, even though the website is hosted on GitHub Pages.

Hopefully, the new font and layout are easier to read than Blogger’s tiny text and bullshit theme nonsense.


How To Avoid Memorizing Times Tables


I was recently told that my niece was trying to memorize her times tables. As an applied mathematician whose coding involves plenty of multiplication, I was not happy to hear this. Nobody who does math actually memorizes times tables, and furthermore, forcing a child to memorize anything is probably the worst possible thing you can do in modern society. No one should memorize their times tables; they should learn how to calculate them. Forcing children to memorize useless equations for no reason is a great way to either ensure they hate math, teach them they should blindly memorize and believe anything adults tell them, or both. So for any parents who wish to teach their children how to be critical thinkers and give them an advantage on their next math test, I am going to describe how to derive the entire times table with only 12 rules.

  1. Anything multiplied by 1 is itself. Note that I said anything, that includes fractions, pies, cars, the moon, or anything else you can think of. Multiplying it by 1 just gives you back the same result.

  2. Any number multiplied by 10 has a zero added on the end. 1 becomes 10, 2 becomes 20, 72 becomes 720, 9999 becomes 99990, etc.

  3. Any single digit multiplied by 11 simply adds itself on the end instead of 0. 1 becomes 11, 2 becomes 22, 5 becomes 55, etc. For anything larger, you never need to multiply by eleven directly. Instead, multiply it by 10 (add a zero to it), then add the number itself.

    \[ \begin{aligned} 11*11 = 11*(10 + 1) = 11*10 + 11 = 110 + 11 = 121\\ 12*11 = 12*(10 + 1) = 12*10 + 12 = 120 + 12 = 132 \end{aligned} \]

  4. You can always reverse the numbers being multiplied and the same result comes out. $$ 12*2 = 2*12 $$, $$ 8*7 = 7*8 $$, etc. This is a simple rule, but it’s very easy to forget, so keep it in mind.

  5. Anything multiplied by 2 is doubled, or added to itself, but you only need to do this up to 9. For example, $$ 4*2 = 4 + 4 = 8 $$. Alternatively, you can count up by 2 that many times:

    \[ 4*2 = 2 + 2 + 2 + 2 = 4 + 2 + 2 = 6 + 2 = 8 \]
    To multiply any large number by two, double each individual digit and carry the result. Because you multiply each digit by 2 separately, the highest result you can get from this is 18, so you will only ever carry a 1, just like in addition.
    \[ \begin{aligned} \begin{matrix} 3 & 6\\ & 2\\ \hline & \\ & \\ \hline & \end{matrix}\quad \begin{matrix} 3 & 6\\ & 2\\ \hline 1 & 2\\ & \\ \hline & \end{matrix}\quad \begin{matrix} 3 & 6\\ & 2\\ \hline 1 & 2\\ 6 & \\ \hline & \end{matrix}\quad \begin{matrix} 3 & 6\\ & 2\\ \hline 1 & 2\\ 6 & \\ \hline 7 & 2 \end{matrix} \end{aligned} \]
    This method is why multiplying anything by 2 is one of the easiest operations in math, and as a result the rest of our times table rules are going to rely heavily on it. Don’t worry about memorizing these results - you’ll memorize them whether you want to or not simply because of how often you use them.

  6. Any number multiplied by 3 is multiplied by 2 and then added to itself. For example:

    \[ 6*3 = 6*(2 + 1) = 6*2 + 6 = 12 + 6 = 18 \]
    Alternatively, you can add the number to itself 3 times: $$ 3*3 = 3 + 3 + 3 = 6 + 3 = 9 $$

  7. Any number multiplied by 4 is simply multiplied by 2 twice. For example: $$ 7*4 = 7*2*2 = 14*2 = 28 $$

  8. Any number multiplied by 5 is the same number multiplied by 4 and then added to itself.

    \[ 6*5 = 6*(4 + 1) = 6*4 + 6 = 6*2*2 + 6 = 12*2 + 6 = 24 + 6 = 30 \]
    Note that I used our rule for 4 here to break it up and calculate it using only 2. Once kids learn division, they will notice that it is often easier to calculate 5 by multiplying by 10 and halving the result, but we assume no knowledge of division.

  9. Any number multiplied by 8 is multiplied by 4 and then by 2, which means it’s actually just multiplied by 2 three times. For example: $$ 7*8 = 7*4*2 = 7*2*2*2 = 14*2*2 = 28*2 = 56 $$

  10. Never multiply anything by 12. Instead, multiply it by 10, then add itself multiplied by 2. For example: $$ 12*12 = 12*(10 + 2) = 12*10 + 12*2 = 120 + 24 = 144 $$

  11. Multiplying any single digit number by 9 results in a number whose digits always add up to nine, and whose digits decrease in the right column while increasing in the left column.

    \[ \begin{aligned} 9 * 1 = 09\\ 9 * 2 = 18\\ 9 * 3 = 27\\ 9 * 4 = 36\\ 9 * 5 = 45\\ 9 * 6 = 54\\ 9 * 7 = 63\\ 9 * 8 = 72\\ 9 * 9 = 81 \end{aligned} \]
    9 times 10, 11, or 12 can be calculated using the rules for those numbers.

  12. For both 6 and 7, we already have rules for all the other numbers, so you just need to memorize 3 results:

    \[ \begin{aligned} 6*6 = 36\\ 6*7 = 42\\ 7*7 = 49 \end{aligned} \]
    Note that $$ 7*6 = 6*7 = 42 $$. This is where people often forget about being able to reverse the numbers. Every single other multiplication involving 7 or 6 can be calculated using a rule for another number.

And there you have it. Instead of trying to memorize a bunch of numbers, kids can learn rules that build on top of each other, each taking advantage of the rules established before it. It’s much more engaging than trying to memorize a giant table of meaningless numbers, a task that’s so mind-numbingly boring I can’t imagine forcing an adult to do it, let alone a small child. More importantly, this approach teaches you what math is really about. It’s not about numbers, or adding things together, or memorizing a bunch of formulas. It’s about establishing simple rules, and then combining those rules into more complex rules you can use to solve more complex problems.
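
For the programmers in the audience, the whole scheme fits in a few lines of code. This is my own rendering of the rules (I've taken one liberty: rule 11 appears as the equivalent arithmetic 10*b - b rather than the digit pattern):

```python
MEMORIZED = {(6, 6): 36, (6, 7): 42, (7, 6): 42, (7, 7): 49}  # rule 12

def double(n):
    # Rule 5's pencil-and-paper method: double each digit right to left, carrying 1s.
    out, carry = "", 0
    for d in reversed(str(n)):
        v = int(d) * 2 + carry
        out, carry = str(v % 10) + out, v // 10
    return int(str(carry) + out) if carry else int(out)

def times(a, b):
    if (a, b) in MEMORIZED:
        return MEMORIZED[(a, b)]                 # rule 12: the only memorized facts
    if a in (6, 7):
        return times(b, a)                       # rule 4: reverse so a known rule applies
    if a == 1:  return b                         # rule 1
    if a == 2:  return double(b)                 # rule 5
    if a == 3:  return double(b) + b             # rule 6
    if a == 4:  return double(double(b))         # rule 7
    if a == 5:  return times(4, b) + b           # rule 8
    if a == 8:  return double(times(4, b))       # rule 9
    if a == 9:  return times(10, b) - b          # rule 11, taken as 10*b minus b
    if a == 10: return int(str(b) + "0")         # rule 2: append a zero
    if a == 11: return times(10, b) + b          # rule 3
    if a == 12: return times(10, b) + double(b)  # rule 10

# The whole 12x12 table falls out of the rules:
assert all(times(a, b) == a * b for a in range(1, 13) for b in range(1, 13))
```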

This also establishes a fundamental connection to computer science that is often glossed over. Both math and programming are repeated abstraction and generalization. It’s about combining simple rules into a more generalized rule, which can then be abstracted into a simpler form and combined to create even more complex rules. Programs start with machine instructions, while math starts with propositions. Programs have functions, and math has theorems. Both build on top of previous results to create more powerful and expressive tools. Both require a spark of creativity to recognize similarities between seemingly unrelated concepts and unite them in a more generalized framework.

We can demonstrate all of this simply by refusing to memorize our times tables.

