Erik McClure

Does Anyone Actually Want Good Software?


Are there any programmers left that actually care about writing good software? As far as I can tell, the software development industry has turned into a series of echo chambers where managers scream about new features and shipping software and analyzing feedback from customers. Then they ignore all the feedback and implement whatever new things are supposed to be cool, like flat design, or cloud computing, or software as a service.

The entire modern web is built on top of the worst programming language that’s still remotely popular. It’s so awful that IE now supports asm.js just so we can use other languages instead. With everyone relentlessly misquoting “Premature optimization is the root of all evil”, it’s hard to get programmers to optimize any of their code at all, let alone get them to care about things like CPU caches and why allocation on the heap is slow and how memory locality matters.

Some coders exist at large corporations that simply pile on more and more lines of code and force everyone to use gigantic frameworks built on top of more gigantic frameworks built on top of even more gigantic frameworks and then wonder why everything is so slow. Other coders exist in startups that use Scala/Hadoop/Node.js and care only about pumping out features or fixing bugs. The thing is, all of these companies make a lot of money, which leads me to ask, does anyone actually want good software anymore?

Do customers simply not care? Is everyone ok with Skype randomly not sending messages, trying (poorly) to sync all your messages, randomly marking certain conversations as unread on other computers, dropping calls, and creating all sorts of other strange and bizarre bugs? Is everyone ok with an antivirus that demands you sign in to a buggy window that keeps losing focus every time you try to type in your password? Is everyone ok with Visual Studio deciding it needs to open a text file and taking 15 seconds to start up an entirely new instance, even though I already have one running, just to display the stupid file?

It seems to me that we’re all so obsessed with making cool stuff, we’ve forgotten how to make stuff that actually works.

Did you know that every single person I know (except for two people) hates flat design? They don’t like it. I don’t like it. There’s a bunch of stuck-up, narcissistic designers shoving flat design down everyone’s throats, and I hate it. The designers don’t care. They insist that it’s elegant and modern and a bunch of other crap that’s all entirely subjective, no matter how hard they try to pretend otherwise. Design is about opinions. If I don’t like your design, you can’t just go and say my opinion is wrong. My opinion isn’t wrong, I just don’t agree with you. There’s a difference.

However, it has become increasingly apparent to me that opinions aren’t allowed in programming. I’m not allowed to say that garbage collectors are bad for high performance software. I’m not allowed to say that pure functional programming isn’t some kind of magical holy grail that will solve all your problems. I’m not allowed to say that flat design is stupid. I’m definitely not allowed to say that I hate Python, because apparently Python is a religion.

Because of this, I am beginning to wonder if I am simply delusional. Apparently I’m the only human being left on planet Earth who really, really doesn’t like typing magical bullshit into his Linux terminal just to get basic things working, instead of having a GUI that wasn’t designed by brain-dead monkeys. Apparently, I’m the only one who is entirely willing to pay money for services instead of having awful, ad-infested online versions powered by JavaScript™ and Node.js™ that fall over every week because someone forgot to cycle the drives in a cloud service 5000 miles away. Apparently, no one can fix the audio sample library industry, or the fact that most of my VSTi’s manage to use 20% of my CPU when they aren’t actually doing anything.

Am I simply getting old? Has the software industry left me behind? Does anyone else out there care about these things? Should I throw in the towel and call it quits? Is the future of software development writing terrible monstrosities held together by duct tape? Is this the only way to have a sustainable business?

Is this the world our customers want? Because it sure isn’t what I want.

Unfortunately, writing music doesn’t pay very well.


How Not To Install Software


It’s that time of the year again, when everyone and their pony puts on a sale, except now it seems to have started much earlier than the traditional Black Friday. Needless to say, this is the only time of year I go around buying expensive sample libraries. One of these libraries was recommended by a friend: LA Scoring Strings - First Chair 2, a cheaper version of LA Scoring Strings. It’s $100 off, which is pretty nice, except that the page that describes the product doesn’t actually have a link to buy it. You have to click the STORE link, and then buy it from there, because this is a completely obvious and intuitive interface design (it isn’t).

So, after finding the proper link in the Store and verifying I am actually purchasing what I want to purchase, they give me exactly one payment option: PayPal. This is pretty common, so we’ll let it slide, and the whole process seems to go smoothly until they give me my receipt. On the receipt page, they give me a link to download the files, and a serial number. How helpful! Until I click the download link, which does not open in a new window, and instead opens a completely different webpage with absolutely no way to get back to the page I was just on, because there is no store page with this information and I have no user account on this site. So, I have to go to my e-mail, where they have helpfully e-mailed me a copy of the receipt (probably for this exact reason) to get the serial number.

I then go back to the download page, only to discover that I am required to use their stupid download manager in order to download the product I just bought. There is no alternative option whatsoever. So I download their stupid download manager, and it magically installs itself somewhere on my computer I will likely never be able to find, because it never asked my permission to do anything, and then demands that I log in. Well, obviously, I don’t have a login, and no one asked me to register until now, so I go to register, which helpfully opens my web browser to register… on a forum. Well, ok, so I register on the forum with a randomly generated password, and activate my account.

So naturally, they then e-mail my password back to me, which by definition means they are storing it in plaintext. So now the password to my account was sent over an unencrypted, entirely open channel, which is insanely stupid, but this is just a sample library, so whatever. I go back to their download manager and put in my credentials and… the login fails. Well, maybe it takes a bit to propagate - no, it just isn’t working. I try again, and triple check that I have the password right. I log out and back into the forum with that very same password, and it still works. It just doesn’t work in the application.
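For contrast, here is a minimal sketch of what sane password handling looks like, using libsodium’s crypto_pwhash API (my own example, not anything this vendor actually does): only a salted, memory-hard hash is ever stored, so there is never a plaintext password to e-mail back to anyone.

#include <sodium.h>
#include <cstdio>
#include <cstring>

int main()
{
  if (sodium_init() < 0) return 1;

  const char* password = "s3cret!&#*@(?"; // punctuation is not a problem here
  char hash[crypto_pwhash_STRBYTES];

  // Store only this salted, memory-hard hash - never the password itself.
  if (crypto_pwhash_str(hash, password, strlen(password),
                        crypto_pwhash_OPSLIMIT_INTERACTIVE,
                        crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
    return 1; // ran out of memory

  // Verification re-derives the hash from the attempted password; the
  // plaintext never needs to be stored or transmitted back to the user.
  if (crypto_pwhash_str_verify(hash, password, strlen(password)) == 0)
    printf("login ok\n");
  return 0;
}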

Standard procedure at this point is for me to take every single weird punctuation character out of my password (making it much weaker) to address the possibility that these people are pants on head retarded and can’t handle a password with punctuation in it. I change my password to an alphanumeric one, and lo and behold, I can suddenly log in to the download manager! Let’s think about this for a moment. The password I used had some punctuation characters in it (like “!&#*@(?” etc.), but in order to make sure it was still a valid password, I logged in to the forum with that password, and it succeeded. I then went to this application and put in the same password and it failed to log me in, which means the program actually only accepts some random subset of all valid passwords that the forum lets you register with.
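My best guess at the cause (and it is only a guess) is that the download manager runs the password through some kind of naive character filter before authenticating, something like this hypothetical check:

#include <cctype>
#include <string>

// Hypothetical reconstruction of the bug: the forum accepts any character,
// but if the client rejects or strips non-alphanumerics before sending the
// password, every valid password containing punctuation will fail to log in.
bool app_accepts(const std::string& pw)
{
  for (unsigned char c : pw)
    if (!std::isalnum(c))
      return false; // silently rejects "!&#*@(?" and friends
  return true;
}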

This is laughably bad programming, but my woes aren’t over yet. I click the download button only to get this incredibly helpful message: “Cannot connect to download servers.” Pissed off, I go play a game in the hopes that once I get back, the servers will work again. I close the game only to discover that my download manager is one giant grey screen no matter what I do to it. It’s forgotten how to draw its own UI at this point. I restart the program, and it has (of course) helpfully forgotten my login credentials. This time, it displays a EULA it apparently forgot to show me the first time around, and once I accept, clicking install successfully starts downloading the files!

Of course, once the files are installed, they aren’t actually installed installed. I have to go into Kontakt and add the libraries to its magical library in order for them to actually get recognized. I can’t tell if this is AudioBro’s fault or Native Instruments’ fault, but at this point I don’t care, because this has already become the worst installation experience of any piece of software I have had to go through in my entire life.

What’s frightening is that this is par for the course across the desolate wasteland that is audio sample libraries. The entire audio engineering industry employs draconian and ultimately ineffective DRM security measures, often bundled with installers that look like they were written in 1998 and never updated. The entire industry uses software that is grotesquely bloated, digging its filthy claws into my operating system and doing all sorts of unspeakable things, and there is no way out.

You can’t disrupt this field, because samples rule everything. If you have good samples, people will buy your shitty sample libraries. EastWest moved from Kontakt (which is a pretty shitty piece of software considering it’s the best sampler in the entire industry) to their own proprietary PLAY engine, which is unstable, bloated, entirely dependent on ASIO4ALL to even work, and prone to crashing. They still make tons of money, because they have the best orchestral samples, which means people will put up with their incredibly bad sampler just so they can use their samples, which are all in a proprietary format that will get you violently sued if you attempt to reverse engineer it.

So, even if you develop the best sampler in the world, it won’t matter, because without samples, your software is dead on arrival. Almost all the samples that are worth having come in proprietary formats that your program can’t understand, and no one can convert these samples to another format (unless they want to reverse engineer the program and get sued, that is). So now the entire sampling industry is locked in an oligopoly of competing samplers that refuse to talk to each other, thus crushing competition by making the cost of entry so prohibitively high that no one can possibly compete with them. And then you get this shit.


Can We Choose What We Enjoy?


One of the most bizarre arguments I have ever heard in ethics is whether or not people can choose to be gay. The idea is, if being gay is genetically predetermined, it’s not their fault, therefore you can’t prosecute them for something they have no control over.

Since when did anyone get to choose what makes them happy? Can you choose to like strawberries? Can you choose to enjoy the smell of dandelions? At best, you can subject yourself to something over and over and over again and enjoy it as a sort of acquired taste, but this doesn’t always work, and the fact remains that you are still predisposed to enjoying certain experiences. Unless we make a concentrated effort to change our preferences, all enjoyable sensory experiences occur without our consent. We are not in charge of what combination of neural impulses our brain happens to find enjoyable. All we can do is slowly influence those preferences, and even then, only sometimes.

This concept of people choosing what they enjoy seems to have infected society, and is often at the root of bizarre and unfair persecution. If we assume that people cannot significantly change the preferences they were dealt by life, whether as a result of genetic or environmental influences, a host of moral issues becomes apparent.

Gender roles stop making sense. In fact, persecuting anyone on the LGBT spectrum immediately becomes invalid. Attacking anyone’s sexual preferences, provided they are harmless, becomes unacceptable. Trying to attack anyone’s artistic or musical preferences becomes difficult, at best. We know for a fact that someone’s culinary preferences are influenced by the genetic distribution of taste buds in their mouth. It’s even hard to properly critique someone’s fashion choices if they happen to despise denim or some other fabric.

As far as I’m concerned, the answer to the question “why would someone like [x]” is always “because their brain is wired in a way that enjoys it.” Humans are, at a fundamental level, sensory processing machines that accidentally achieved self-awareness. We enjoy something because we are programmed to enjoy it. To insult what kinds of sensory input someone enjoys simply because they do not match up with your own is laughably juvenile. The only time this kind of critique is valid is when someone’s preferences cause harm to another person. We all have our own unique ways of processing sensory input, and so we will naturally enjoy different things, through no fault of our own. Sometimes, with a substantial amount of effort, we can slowly change some of those preferences, but most of the time, we’re stuck with whatever we were born with (or whatever environmental factors shaped our perception in our childhood).

Instead of accusing someone of liking something you don’t approve of, maybe next time you should try to understand why they like it, instead. Maybe you’ll find a new friend.


How To Make Your Profiler 10x Faster


Frustrated with C profilers that are either so minimal as to be useless, or giant behemoths that require you to install device drivers, I started writing a lightweight profiler for my utility library. I already had a high precision timer class, so it was just a matter of using a radix trie that didn’t blow up the cache. I was very careful about minimizing the impact the profiler had on the code, even going so far as to check if extended precision floating point calculations were slowing it down.

Of course, since I was writing a profiler, I could use the profiler to profile itself. By pretending to profile a random number added to a cache-murdering int stuck in the middle of an array, I could do a fairly good simulation of profiling a function, while also profiling the act of profiling the function. The difference between the two measurements is how much overhead the profiler has. Unfortunately, my initial results were… unfavorable, to say the least.
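A minimal sketch of that self-measurement, using raw QueryPerformanceCounter in place of the profiler’s internal calls (which aren’t shown here): time the trivial cache-hostile operation by itself, then with the per-sample counter read a profiler would have to make, and subtract the two averages.

#include <windows.h>
#include <cstdio>

int main()
{
  static volatile int arr[4096]; // cache-murdering array, per the setup above
  const int N = 1000000;
  LARGE_INTEGER freq, start, end, tmp;
  QueryPerformanceFrequency(&freq);

  // Time the trivial operation alone...
  QueryPerformanceCounter(&start);
  for (int i = 0; i < N; ++i)
    arr[(i * 2654435761u) & 4095] += i;
  QueryPerformanceCounter(&end);
  double bare = (end.QuadPart - start.QuadPart) * 1e9 / ((double)freq.QuadPart * N);

  // ...then with the per-sample counter read added in.
  QueryPerformanceCounter(&start);
  for (int i = 0; i < N; ++i) {
    arr[(i * 2654435761u) & 4095] += i;
    QueryPerformanceCounter(&tmp);
  }
  QueryPerformanceCounter(&end);
  double sampled = (end.QuadPart - start.QuadPart) * 1e9 / ((double)freq.QuadPart * N);

  printf("approx. cost per sample: %.1f ns\n", sampled - bare);
  return 0;
}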

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 1370173 µs   [##########
  [code]: 545902.7 µs   [##########
  [main.cpp:3866] outer: 5530.022 ns   [....      
    [code]: 3872.883 ns   [...       
    [main.cpp:3868] inner: 1653.139 ns   [.         
  [main.cpp:3856] control: 1661.779 ns   [.         
  [main.cpp:3876] beginend: 1645.466 ns   [.         
The profiler had an overhead of almost 4 microseconds. When you’re dealing with functions that are called thousands of times a second, you need to be aware of code speed on the scale of nanoseconds, and this profiler would completely ruin the code. At first, I thought it was my fault, but none of my tweaks seemed to have any measurable effect on the speed whatsoever. On a whim, I decided to comment out the actual _querytime function that was calling QueryPerformanceCounter, then run an external profiler on it.
Average control: 35 ns
What?! Well, no wonder my tweaks weren’t doing anything; all my code was taking a scant 35 nanoseconds to run. The other 99.9% of the time was spent on that single, stupid call, which also happened to be the one call I couldn’t get rid of. However, that isn’t the end of the story; _querytime() looks like this:
void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  // Save the current affinity mask, then pin this thread to core 0 so that
  // QueryPerformanceCounter always reads the same core's counter.
  DWORD procmask = _getaffinity();
  HANDLE curthread = GetCurrentThread();
  SetThreadAffinityMask(curthread, 1);

  QueryPerformanceCounter((LARGE_INTEGER*)_pval);

  // Restore the original affinity mask before returning.
  SetThreadAffinityMask(curthread, procmask);
}

Years ago, it was standard practice to wrap all calls to QueryPerformanceCounter in a CPU core mask to force it to operate on a single core due to potential glitches in the BIOS messing up your calculations. Microsoft itself had recommended it, and you could find this same code in almost any open-source library that was taking measurements. It turns out that this is no longer necessary:

Do I need to set the thread affinity to a single core to use QPC?

No. For more info, see Guidance for acquiring time stamps. This scenario is neither necessary nor desirable.

I couldn’t get rid of the QueryPerformanceCounter call itself, but I could get rid of all that other crap it was doing. I commented it out, and voilà! The overhead had been reduced to a scant 340 nanoseconds, only a tenth of what it had been before. I’m still spending 90% of my calculation time calling that stupid function, but there isn’t much I can do about that. Either way, it was a good reminder about the entire reason for using a profiler - bottlenecks tend to crop up in the most unexpected places.
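For reference, with the affinity code commented out, _querytime reduces to the bare call:

void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  // Modern systems no longer need the affinity dance; QPC alone is safe.
  QueryPerformanceCounter((LARGE_INTEGER*)_pval);
}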

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 142416 µs   [##########
  [code]: 56575.4 µs   [##########
  [main.cpp:3866] outer: 515.43 ns   [....      
    [code]: 343.465 ns   [...       
    [main.cpp:3868] inner: 171.965 ns   [.         
  [main.cpp:3876] beginend: 173.025 ns   [.         
  [main.cpp:3856] control: 169.954 ns   [.         

I also tried adding standard deviation measurements, but that ended up giving me ludicrous values of 342±27348 ns, which isn’t very helpful. Apparently there’s quite a lot of variance in function call times, so much so that while the averages always tend to be the same over time, the statistical variance goes through the roof. This is probably why most profilers don’t include the standard deviation. I was able to add in accurate unprofiled code measurements, though, and the profiler uses a dynamic triple magnitude method of displaying how much time a function takes.
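For anyone who wants to try anyway, the standard single-pass way to accumulate mean and variance without storing every sample is Welford’s method (a sketch, not my profiler’s actual implementation):

#include <cstdint>

// Welford's online algorithm: numerically stable running mean/variance.
struct RunningStats
{
  uint64_t n = 0;
  double mean = 0.0;
  double m2 = 0.0;

  void add(double x)
  {
    ++n;
    double delta = x - mean;
    mean += delta / n;
    m2 += delta * (x - mean); // accumulates the sum of squared deviations
  }

  double variance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
};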


The Problem With Photorealism


Many people assume that modern graphics technology is now capable of rendering photorealistic video games. If you define photorealistic as meaning that any still frame is indistinguishable from a real photo, then we can get pretty close. Unfortunately, the problem with video games is that they are not still frames - they move.

What people don’t realize is that modern games rely on faking a lot of stuff, and that means they only look photorealistic in a very tight set of circumstances. They rely on you not paying close attention to environmental details, so you don’t notice that the grass is actually just painted onto the terrain. They precompute environmental convolution maps and bake ambient occlusion and radiance information into level architecture. You can’t knock down a building in a game unless it is specifically programmed to be breakable and all the necessary preparations are made. Changes in levels are often scripted, with complex physical changes and graphical consequences being largely precomputed and simply triggered at the appropriate time.

Modern photorealism, like the 3D graphics of ages past, is smoke and mirrors, the result of very talented programmers and artists using tricks of the eye to convince you that a level is much more detailed and interactive than it really is. There’s nothing wrong with this, but we’re so good at doing it that people think we’re a heck of a lot closer to photorealistic games than we really are.

If you want to go beyond simple photorealism and build a game that feels real, you have to deal with a lot of extremely difficult problems. Our best antialiasing methods are perceptual, because doing real antialiasing is prohibitively expensive. Global illumination is achieved by deconstructing a level’s polygons into an octree and using the GPU to cubify moving objects in realtime. Many advanced graphical techniques in use today depend on precomputed values and static geometry. The assumption that most of the world is probably going to stay the same is a powerful one, and enables huge amounts of optimization. Unfortunately, as long as we make that assumption, none of it will ever feel truly real.

Trying to build a world that does not take anything for granted rapidly spirals out of control. Where do you draw the line? Does gravity always point down? Does the atmosphere always behave the same way? Is the sun always yellow? What counts as solid ground? What happens when you blow it up? Is the object you’re standing on even a planet? Imagine trying to code an engine that can take into account all of these possibilities in realtime. This is clearly horrendously inefficient, and yet there is no other way to achieve a true dynamic environment. At some point, we will have to make assumptions about what will and will not change, and these sometimes have surprising consequences. A volcanic eruption, for example, drastically changes the atmospheric composition and completely messes up the ambient lighting and radiosity.

Ok, well, at least we have dynamic animations, right? Wrong. Almost all modern games still use precomputed animations. Some fancy technology can occasionally try to interpolate between them, but that’s about it. We have no reliable method of generating animations on the fly that don’t look horrendously awkward and stiff. It turns out that trying to calculate a limb’s shortest path from point A to point B while avoiding awkward positions and obstacles amounts to solving the Euler-Lagrange equation over an n-dimensional manifold! As a result, it’s incredibly difficult to create smooth animations, because our ability to fluidly shift from one animation to another is extremely limited. This is why we still have weird looking walk animations and occasional animation jumping.
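For the curious, this is the general form being referenced: a trajectory q(t) that minimizes a cost functional \int L(q, \dot{q}, t)\,dt must satisfy, in every coordinate,

\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0,

which for a limb means a coupled system of nonlinear differential equations, one per joint angle.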

The worst problem, however, is that of content creation. The simple fact is that at photorealistic detail levels, it takes way too long for a team of artists to build a believable world. Even if we had super amazing 3D modelers that would allow an artist to craft any small object in a matter of minutes (which we don’t), artists aren’t machines. Things look real because they have a history behind them, a reason for their current state of being. We can make photorealistic CGI for movies because each scene is scripted and has a well-defined scope. If you’re building GTA V, you can’t somehow come up with a unique history for every single one of the three hundred suburban houses you’re building.

Even if we did invent a way to render photorealistic graphics, it would all be for naught until we figured out a way to generate obscene amounts of content at incredibly high levels of detail. Older games weren’t just easier to render, they were easier to make. There comes a point where no matter how many artists you hire, you simply can’t build an expansive game world at a photorealistic level of detail in just 3 years.

People always talk about realtime raytracing as the holy grail of graphics programming without realizing just what is required to take advantage of it. Photorealism isn’t just about processing power, it’s about content.

