Erik McClure

How Not To Install Software


It’s that time of the year again, when everyone and their pony puts on a sale, except now it seems to have started much earlier than the traditional Black Friday. Needless to say, this is the only time of year I go around buying expensive sample libraries. One of these libraries was recommended by a friend: LA Scoring Strings - First Chair 2, a cheaper version of LA Scoring Strings. It’s $100 off, which is pretty nice, except that the page that describes the product doesn’t actually have a link to buy it. You have to click the STORE link, and then buy it from there, because this is a completely obvious and intuitive interface design (it isn’t).

So, after finding the proper link in the Store and verifying I am actually purchasing what I want to purchase, they give me exactly one payment option: PayPal. This is pretty common, so we’ll let it slide, and the whole process seems to go smoothly until they give me my receipt. The receipt page has a link to download the files, and a serial number. How helpful! Until I click the download link, which does not open in a new window, and instead opens a completely different webpage with absolutely no way to get back to the page I was just on, because there is no store page with this information and I have no user account on this site. So, I have to go to my e-mail, where they have helpfully e-mailed me a copy of the receipt (probably for this exact reason) to get the serial number.

I then go back to the download page, only to discover that I am required to use their stupid download manager to download the product I just bought. There is no alternative option whatsoever. So I download their stupid download manager, and it magically installs itself somewhere on my computer I will likely never be able to find, because it never asked my permission to do anything, and then it demands that I log in. Well, obviously, I don’t have a login, and no one asked me to register until now, so I go to register, which helpfully opens my web browser to register… on a forum. Well, ok, so I register on the forum with a randomly generated password, and activate my account.

So naturally, they then e-mail my password back to me, which almost certainly means they are storing it in plaintext. So now the password to my account has been sent over an unencrypted, entirely open channel, which is insanely stupid, but this is just a sample library, so whatever. I go back to their download manager and put in my credentials and… the login fails. Well, maybe it takes a bit to propagate - no, it just isn’t working. I try again, and triple check that I have the password right. I log out and back into the forum with that very same password, and it works fine. It just doesn’t work in the application.

Standard procedure at this point is for me to take every single weird punctuation character out of my password (making it much weaker) to address the possibility that these people are pants on head retarded and can’t handle a password with punctuation in it. I change my password to an alphanumeric one, and lo and behold, I can suddenly log in to the download manager! Let’s think about this for a moment. The password I used had some punctuation characters in it (like “!&#*@(?” etc.), but in order to make sure it was still a valid password, I logged in to the forum with that password, and it succeeded. I then went to this application and put in the same password and it failed to log me in, which means the program actually only accepts some random subset of all valid passwords that the forum lets you register with.

This is laughably bad programming, but my woes aren’t over yet. I click the download button only to get this incredibly helpful message: “Cannot connect to download servers.” Pissed off, I go play a game in the hopes that once I get back, the servers will work again. I close the game only to discover that my download manager is one giant grey screen no matter what I do to it. It’s forgotten how to draw its own UI at this point. I restart the program, and it has (of course) helpfully forgotten my login credentials. This time, it displays a EULA it apparently forgot to show me the first time around, and once I accept, clicking install successfully starts downloading the files!

Of course, once the files are installed, they aren’t actually installed installed. I have to go into Kontakt and add the libraries to its magical library for them to actually get recognized. I can’t tell if this is AudioBro’s fault or Native Instruments’ fault, but at this point I don’t care, because this has already become the worst installation experience of any piece of software I have had to go through in my entire life.

What’s frightening is that this is par for the course across the desolate wasteland that is Audio Sample Libraries. The entire audio engineering industry employs draconian and ultimately ineffective DRM security measures, often bundled with installers that look like they were written in 1998 and never updated. The entire industry uses software that is grotesquely bloated, digging its filthy claws into my operating system and doing all sorts of unspeakable things, and there is no way out.

You can’t disrupt this field, because samples rule everything. If you have good samples, people will buy your shitty sample libraries. EastWest moved from Kontakt (which is a pretty shitty piece of software considering it’s the best sampler in the entire industry) to their own proprietary PLAY engine, which is unstable, bloated, entirely dependent on ASIO4ALL to even work, and prone to crashing. They still make tons of money, because they have the best orchestral samples, which means people will put up with their incredibly bad sampler just so they can use their samples, which are all in a proprietary format that will get you violently sued if you attempt to reverse engineer it.

So, even if you develop the best sampler in the world, it won’t matter, because without samples, your software is dead on arrival. Almost all the samples that are worth having come in proprietary formats that your program can’t understand, and no one can convert these samples to another format (unless they want to reverse engineer the program and get sued, that is). So now the entire sampling industry is locked in an oligopoly of competing samplers that refuse to talk to each other, crushing competition by making the cost of entry so prohibitively high that no one can break in. And then you get this shit.


Can We Choose What We Enjoy?


One of the most bizarre arguments I have ever heard in ethics is whether or not people can choose to be gay. The idea is that if being gay is genetically predetermined, it’s not their fault, and therefore you can’t prosecute them for something they have no control over.

Since when did anyone get to choose what makes them happy? Can you choose to like strawberries? Can you choose to enjoy the smell of dandelions? At best, you can subject yourself to something over and over and over again and enjoy it as a sort of acquired taste, but this doesn’t always work, and the fact remains that you are still predisposed to enjoying certain experiences. Unless we make a concentrated effort to change our preferences, all enjoyable sensory experiences occur without our consent. We are not in charge of what combination of neural impulses our brain happens to find enjoyable. All we can do is slowly influence those preferences, and even then, only sometimes.

This concept of people choosing what they enjoy seems to have infected society, and is often at the root of bizarre and unfair persecution. If we assume that people cannot significantly change the preferences they were dealt by life, whether as a result of genetic or environmental influences, a host of moral issues become apparent.

Gender roles stop making sense. In fact, persecuting anyone on the LGBT spectrum immediately becomes invalid. Attacking anyone’s sexual preferences, provided they are harmless, becomes unacceptable. Trying to attack anyone’s artistic or musical preferences becomes difficult, at best. We know for a fact that someone’s culinary preferences are influenced by the genetic distribution of taste buds in their mouth. It’s even hard to properly critique someone’s fashion choices if they happen to despise denim or some other fabric.

As far as I’m concerned, the answer to the question “why would someone like [x]” is always “because their brain is wired in a way that enjoys it.” Humans are, at a fundamental level, sensory processing machines that accidentally achieved self-awareness. We enjoy something because we are programmed to enjoy it. To insult what kinds of sensory input someone enjoys simply because they do not match up with your own is laughably juvenile. The only time this kind of critique is valid is when someone’s preferences cause harm to another person. We all have our own unique ways of processing sensory input, and so we will naturally enjoy different things, through no fault of our own. Sometimes, with a substantial amount of effort, we can slowly change some of those preferences, but most of the time, we’re stuck with whatever we were born with (or whatever environmental factors shaped our perception in our childhood).

Instead of accusing someone of liking something you don’t approve of, maybe next time you should try to understand why they like it. Maybe you’ll find a new friend.


How To Make Your Profiler 10x Faster


Frustrated with C profilers that are either so minimal as to be useless, or giant behemoths that require you to install device drivers, I started writing a lightweight profiler for my utility library. I already had a high precision timer class, so it was just a matter of using a radix trie that didn’t blow up the cache. I was very careful about minimizing the impact the profiler had on the code, even going so far as to check if extended precision floating point calculations were slowing it down.
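For context, the timer class itself is just a thin wrapper around QueryPerformanceCounter. A minimal sketch of the idea (the names here are illustrative, not the actual class):

#include <windows.h>

// Minimal sketch of a QPC-based high precision timer.
class HighPrecisionTimer
{
  LARGE_INTEGER _start;
  double _freq; // counts per second, fixed at boot

public:
  HighPrecisionTimer()
  {
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    _freq = (double)freq.QuadPart;
    QueryPerformanceCounter(&_start);
  }
  void Reset() { QueryPerformanceCounter(&_start); }
  double ElapsedUs() const // microseconds since construction or Reset()
  {
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return ((now.QuadPart - _start.QuadPart) * 1000000.0) / _freq;
  }
};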

Of course, since I was writing a profiler, I could use the profiler to profile itself. By pretending to profile a random number added to a cache-murdering int stuck in the middle of an array, I could do a fairly good simulation of profiling a function, while also profiling the act of profiling the function. The difference between the two measurements is how much overhead the profiler has. Unfortunately, my initial results were… unfavorable, to say the least.
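The experiment looks roughly like this. ProfileScope is a hypothetical stand-in for the profiler’s actual RAII macro (which records samples into the radix trie); the workload and names are illustrative:

#include <cstdlib>

// Hypothetical RAII scope standing in for the real profiling macro.
struct ProfileScope
{
  explicit ProfileScope(const char* name) : _name(name) { /* start timer */ }
  ~ProfileScope() { /* stop timer, record sample under _name */ }
  const char* _name;
};

static volatile int bigarray[16384]; // cache-murdering array

static int fake_work(int seed)
{
  int tmp = bigarray[8192] + seed; // random number added to an int mid-array
  bigarray[8192] = tmp;
  return tmp;
}

void measure_overhead()
{
  { // control: just the workload
    ProfileScope p("control");
    for(int i = 0; i < 100000; ++i) fake_work(rand());
  }
  { // outer: the workload plus the act of profiling each call
    ProfileScope p("outer");
    for(int i = 0; i < 100000; ++i) {
      ProfileScope q("inner");
      fake_work(rand());
    }
  }
  // (outer - inner), per iteration, is the profiler's overhead.
}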

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 1370173 µs   [##########
  [code]: 545902.7 µs   [##########
  [main.cpp:3866] outer: 5530.022 ns   [....      
    [code]: 3872.883 ns   [...       
    [main.cpp:3868] inner: 1653.139 ns   [.         
  [main.cpp:3856] control: 1661.779 ns   [.         
  [main.cpp:3876] beginend: 1645.466 ns   [.         
The profiler had an overhead of almost 4 microseconds. When you’re dealing with functions that are called thousands of times a second, you need to be aware of code speed on the scale of nanoseconds, and this profiler would completely ruin the code. At first, I thought it was my fault, but none of my tweaks seemed to have any measurable effect on the speed whatsoever. On a whim, I decided to comment out the actual _querytime function that was calling QueryPerformanceCounter, then run an external profiler on it.
Average control: 35 ns
What?! Well no wonder my tweaks weren’t doing anything, all my code was taking a scant 35 nanoseconds to run. The other 99.9% of the time was spent on that single, stupid call, which also happened to be the one call I couldn’t get rid of. However, that isn’t the end of the story; _querytime() looks like this:
void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  DWORD procmask = _getaffinity(); // save the current affinity mask
  HANDLE curthread = GetCurrentThread();
  SetThreadAffinityMask(curthread, 1); // pin the thread to the first core

  QueryPerformanceCounter((LARGE_INTEGER*)_pval); // the actual measurement

  SetThreadAffinityMask(curthread, procmask); // restore the original mask
}

Years ago, it was standard practice to wrap all calls to QueryPerformanceCounter in a CPU core mask to force it to operate on a single core due to potential glitches in the BIOS messing up your calculations. Microsoft itself had recommended it, and you could find this same code in almost any open-source library that was taking measurements. It turns out that this is no longer necessary:

**Do I need to set the thread affinity to a single core to use QPC?**

No. This scenario is neither necessary nor desirable. For more info, see Guidance for acquiring time stamps.

I couldn’t get rid of the QueryPerformanceCounter call itself, but I could get rid of all that other crap it was doing. I commented it out, and voilà! The overhead had been reduced to a scant 340 nanoseconds, only a tenth of what it had been before. I’m still spending 90% of my calculation time calling that stupid function, but there isn’t much I can do about that. Either way, it was a good reminder about the entire reason for using a profiler - bottlenecks tend to crop up in the most unexpected places.
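With the affinity juggling stripped out, _querytime() reduces to little more than the call itself:

void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  // No more affinity mask dance; QPC alone is all that's left.
  QueryPerformanceCounter((LARGE_INTEGER*)_pval);
}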

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 142416 µs   [##########
  [code]: 56575.4 µs   [##########
  [main.cpp:3866] outer: 515.43 ns   [....      
    [code]: 343.465 ns   [...       
    [main.cpp:3868] inner: 171.965 ns   [.         
  [main.cpp:3876] beginend: 173.025 ns   [.         
  [main.cpp:3856] control: 169.954 ns   [.         

I also tried adding standard deviation measurements, but that ended up giving me ludicrous values of 342±27348 ns, which isn’t very helpful. Apparently there’s quite a lot of variance in function call times, so much so that while the averages always tend to be the same over time, the statistical variance goes through the roof. This is probably why most profilers don’t include the standard deviation. I was able to add in accurate unprofiled code measurements, though, and the profiler uses a dynamic triple magnitude method of displaying how much time a function takes.
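For reference, the usual way to track a running standard deviation without storing every sample is Welford’s method; this sketch is the textbook version, not my exact implementation:

#include <cmath>

// Welford's online variance algorithm: feed it one sample at a time, and
// the mean and standard deviation are available at any point.
struct RunningStats
{
  unsigned __int64 n = 0;
  double mean = 0.0;
  double m2 = 0.0; // running sum of squared deviations from the mean

  void Add(double x)
  {
    ++n;
    double delta = x - mean;
    mean += delta / n;
    m2 += delta * (x - mean); // note: uses the *updated* mean
  }
  double StdDev() const { return (n > 1) ? std::sqrt(m2 / (n - 1)) : 0.0; }
};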


The Problem With Photorealism


Many people assume that modern graphics technology is now capable of rendering photorealistic video games. If you define photorealism as producing still frames that are indistinguishable from real photos, then we can get pretty close. Unfortunately, the problem with video games is that they are not still frames - they move.

What people don’t realize is that modern games rely on faking a lot of stuff, and that means they only look photorealistic in a very tight set of circumstances. They rely on you not paying close attention to environmental details so you don’t notice that the grass is actually just painted onto the terrain. They precompute environmental convolution maps and bake ambient occlusion and radiance information into level architecture. You can’t knock down a building in a game unless it is specifically programmed to be breakable and all the necessary preparations are made. Changes in levels are often scripted, with complex physical changes and graphical consequences being largely precomputed and simply triggered at the appropriate time.

Modern photorealism, like the 3D graphics of ages past, is smoke and mirrors, the result of very talented programmers and artists using tricks of the eye to convince you that a level is much more detailed and interactive than it really is. There’s nothing wrong with this, but we’re so good at doing it that people think we’re a heck of a lot closer to photorealistic games than we really are.

If you want to go beyond simple photorealism and build a game that feels real, you have to deal with a lot of extremely difficult problems. Our best antialiasing methods are perceptual, because doing real antialiasing is prohibitively expensive. Global illumination is achieved by deconstructing a level’s polygons into an octree and using the GPU to cubify moving objects in realtime. Many advanced graphical techniques in use today depend on precomputed values and static geometry. The assumption that most of the world is probably going to stay the same is a powerful one, and enables huge amounts of optimization. Unfortunately, as long as we make that assumption, none of it will ever feel truly real.

Trying to build a world that does not take anything for granted rapidly spirals out of control. Where do you draw the line? Does gravity always point down? Does the atmosphere always behave the same way? Is the sun always yellow? What counts as solid ground? What happens when you blow it up? Is the object you’re standing on even a planet? Imagine trying to code an engine that can take into account all of these possibilities in realtime. This is clearly horrendously inefficient, and yet there is no other way to achieve a true dynamic environment. At some point, we will have to make assumptions about what will and will not change, and these sometimes have surprising consequences. A volcanic eruption, for example, drastically changes the atmospheric composition and completely messes up the ambient lighting and radiosity.

Ok, well, at least we have dynamic animations, right? Wrong. Almost all modern games still use precomputed animations. Some fancy technology can occasionally try to interpolate between them, but that’s about it. We have no reliable method of generating animations on the fly that don’t look horrendously awkward and stiff. It turns out that trying to calculate a limb’s shortest path from point A to point B while avoiding awkward positions and obstacles amounts to solving the Euler-Lagrange equation over an n-dimensional manifold! As a result, it’s incredibly difficult to create smooth animations, because our ability to fluidly shift from one animation to another is extremely limited. This is why we still have weird looking walk animations and occasional animation jumping.

The worst problem, however, is that of content creation. The simple fact is that at photorealistic detail levels, it takes way too long for a team of artists to build a believable world. Even if we had super amazing 3D modeling tools that allowed an artist to craft any small object in a matter of minutes (which we don’t), artists aren’t machines. Things look real because they have a history behind them, a reason for their current state of being. We can make photorealistic CGI for movies because each scene is scripted and has a well-defined scope. If you’re building GTA V, you can’t come up with three hundred unique histories, one for every single suburban house you’re building.

Even if we did invent a way to render photorealistic graphics, it would all be for naught until we figured out a way to generate obscene amounts of content at incredibly high levels of detail. Older games weren’t just easier to render, they were easier to make. There comes a point where no matter how many artists you hire, you simply can’t build an expansive game world at a photorealistic level of detail in just 3 years.

People always talk about realtime raytracing as the holy grail of graphics programming without realizing just what is required to take advantage of it. Photorealism isn’t just about processing power, it’s about content.


Google's Decline Really Bugs Me


Google is going down the drain.

That isn’t to say they aren’t fantastically successful. They are. I still use their products, mostly because I don’t put things on the internet I don’t want other people to find, and I’m not female, so I don’t have to worry about misogynists stalking me. They still make stupendous amounts of money and pump out some genuinely good software. They still have the best search engine. Like Microsoft, they’ll be a force to be reckoned with for many decades to come.

Google, however, represented an ideal. They founded the company with the motto “Don’t Be Evil”, and the unspoken question was, how long would this last? The answer, oddly enough, was “until Larry Page took over”.

In its early years, Google unleashed the creativity of the brilliant people it hired upon the world, coming up with a slew of fantastic products that were a joy to use. Google made huge contributions to the open-source world and solved scalability problems with an elegance that has yet to be surpassed. They famously let engineers use 20% of their time to pursue their own interests, and the result was an unstoppable tidal wave of innovation. Google was, for a brief moment, a shining beacon of hope, a force of good in a bleak world of corporations only concerned with maximizing profit.

Then Larry Page became CEO. Larry Page worshiped Steve Jobs, who gave him a bunch of bad advice centered around maximizing profit. The result was predictable and catastrophic, as the entire basis of what had made Google so innovative was destroyed for the sake of maximizing profit. Now it’s just another large company - only concerned about maximizing profit.

Google was a company that, for a time, I loved. To me, they represented the antithesis of Microsoft, a rebellion against a poisonous corporate culture dominated by profiteering that had no regard for its users. Google was just a bunch of really smart people trying to make the world a better place, and for a precious few years, they succeeded - until it all came tumbling down. Like an artist whose idol has become embroiled in a drug abuse scandal, I have lost my guiding light.

Google was largely the reason I wanted to start my own company, even if college kept me from doing so. As startup culture continued to suck the life out of Silicon Valley, I held on to Google as an ideal, an example of the kind of company I wanted to build instead of a site designed to sort cat photos. A company that made money because it solved real problems better than everyone else. A company that respected good programming practices, using the right tool for the job, and the value of actually solving a problem instead of just throwing more code at it.

Google was a company that solved problems first, and made money second.

Now, it has succumbed to maximizing stock price for a bunch of rich Wall Street investors who don’t care about anything other than filling their own pockets with as much cash as they possibly can. Once again, the rest of the world is forced to sit around, waiting until an investor accidentally makes the world a better place in the process of trying to make as much money as possible.

Most people think this is the only way to get things done. For a precious few years, I could point to Google and say otherwise. Now, it has collapsed, and its collapse has made me doubt my own resolve. If Google, of all companies, couldn’t maintain that idealistic vision, was it even possible?

Google gave me a reason to believe that humanity could do better. That we could move past a Wall Street that has become nothing more than a rotting cesspool of greed and corruption.

Now, Google has fallen, along with the ideal it encompassed. Is there a light at the end of the tunnel? Or is it a train, a force of reality come to remind us that no matter how much we reach for utopia, we will be sentenced to drown in our own greed?

